Caveat: This project is very much in progress and will likely take a year or more. The motivation behind it, building memory models for machine learning, is itself a very long-term research project. If I ever pursue a PhD, this is the area I would choose.
In this post I bring to bear years of study in machine learning, probability, graph theory, and psychology/neuroscience to explore the future of memory structures for machine intelligence. Of particular interest are memory structures that enable humanlike flexibility on long-term and novel tasks: memory with temporal and sequential addressing, categorical structure, long-term fidelity coupled with updatability, associative linking, an analog of consolidation from short-term into long-term memory, cue-based recall, free recall, and more. As mentioned, this is a large project. I'm currently studying the work of Josh Tenenbaum and others on probabilistic programming and program induction, along with the related topics of probabilistic graphical models and model building. Some readings of interest are:
Building Machines That Learn and Think Like People
Probabilistic Graphical Modeling - Stanford CS 228
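To make one of the attributes above concrete, here is a minimal sketch of cue-based, associative recall: a toy content-addressable store that retrieves the item whose key vector best matches a noisy or partial cue. The class name, vectors, and similarity choice are my own illustration, not drawn from any of the readings above.

```python
import numpy as np

class AssociativeMemory:
    """Toy content-addressable store: recall the stored item whose
    key vector is most similar (by cosine similarity) to a cue."""

    def __init__(self):
        self.keys = []    # cue/key vectors
        self.values = []  # stored items

    def store(self, key, value):
        self.keys.append(np.asarray(key, dtype=float))
        self.values.append(value)

    def recall(self, cue):
        cue = np.asarray(cue, dtype=float)
        K = np.stack(self.keys)
        # cosine similarity between the cue and every stored key
        sims = K @ cue / (np.linalg.norm(K, axis=1) * np.linalg.norm(cue) + 1e-12)
        return self.values[int(np.argmax(sims))]

mem = AssociativeMemory()
mem.store([1, 0, 0], "red")
mem.store([0, 1, 0], "green")
mem.store([0, 0, 1], "blue")

# A noisy, partial cue still retrieves the nearest stored item.
print(mem.recall([0.9, 0.1, 0.05]))  # -> red
```

Even this toy version shows why cue-based recall differs from address-based lookup: retrieval degrades gracefully with cue noise rather than failing outright, which is one of the humanlike properties this project is after.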