What sort of recall tasks are you talking about? What sort of training procedure? What sort of hint? And how is "remembering" operationalized? Does it make successful predictions about human memory performance?
"recall" just means converging on a particular state given different initial states, where a state is the set of activation statuses of the nodes. For example if you have a network of 50 nodes, and a recall state has subset A active and B not active, you would hope the network would converge to that state eventually even if you put it in an initial state where all 50 nodes are active.
The training procedure involves taking the state you want to be a recall state and minimizing the "energy" of the network when it is in that state, by modifying the edge weights. I am not very well versed in the procedure, but Wikipedia seems to give a decent overview, and apparently the underlying model (the "Ising" model, whatever that is) shows up all over the place in computing and thermodynamics.
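Roughly, the standard recipe (the Hebbian outer-product rule, which I believe is what the Wikipedia article describes) adds a contribution to each edge weight for every pattern you want to store. Here is a sketch of that rule together with the Ising-style energy function, showing that the stored pattern ends up at a much lower energy than an arbitrary state:

```python
import numpy as np

def energy(W, s):
    """Ising/Hopfield-style energy: E(s) = -1/2 * sum_ij w_ij * s_i * s_j."""
    return -0.5 * s @ W @ s

def train(patterns):
    """Hebbian outer-product rule: strengthen the edge between two nodes
    that are in the same state in a stored pattern, weaken it otherwise."""
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)       # no self-connections
    return W

rng = np.random.default_rng(0)
stored = rng.choice([-1, 1], size=20)      # the state we want to recall
W = train([stored])

random_state = rng.choice([-1, 1], size=20)
print(energy(W, stored))         # low energy: the stored state is a minimum
print(energy(W, random_state))   # much higher energy for an arbitrary state
```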
By "hint" I meant putting the network in an initial state that is closer to the correct recall state in the state space than it is to other "incorrect" recall states. Conceptually, the recall states are all local minima -- so a "hint" is just making the initial state somewhere within the range of states that converge to the correct local minima.
I am not sure what you mean by "operationalized", but I assume you mean "how would it be hooked up to something to make it actually work?" That is tricky and I don't have all the answers, since I think most research has been done on ANNs, where the edge weights are changed by whoever (or whatever) is directing the algorithm -- which might be a computer, but is still not the network itself.
My intuition tells me that since we know synapses are plastic with regard to their strength, and to whether they are excitatory or inhibitory, there must be a mechanism that somehow reinforces synapse "weight" when some state is strong enough to be a "memory." In other words, some biological mechanism that mimics what happens when a Hopfield ANN is trained. Since, as I said, the model involved in training ANNs is very common in thermodynamics, I don't think it is that implausible that something analogous might be going on in the brain.
As for setting the network in a state, and/or reading the state, we can already do that with ANNs. It just requires clever network topology. For example, I could wire up some perceptrons to filter an image, direct their output to a Hopfield net, wait for the Hopfield net to converge using a clever ANN that somehow measures its activity, and then read the Hopfield net's state into another network and do whatever I want with the information -- all using neural networks. I could even have the input network be the same as the output one, and use inhibitory edges to keep the output from "reading" until the Hopfield net had settled.
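I don't have real numbers for that wiring, but here is a hypothetical toy version of the pipeline (all the names and weights are made up for illustration): a perceptron-style front end turns an "image" into an initial state, the Hopfield net is left to settle, and the read-out only happens once the state has stopped changing, which stands in for the inhibitory gating:

```python
import numpy as np

rng = np.random.default_rng(2)

# --- toy "memory" layer: a Hopfield net storing one 16-node pattern ------
stored = rng.choice([-1, 1], size=16)
W = np.outer(stored, stored).astype(float)
np.fill_diagonal(W, 0)

# --- toy "input" layer: a perceptron-style filter -------------------------
# A random projection plus a threshold stands in for whatever feature
# detectors sit in front of the memory; all it has to do is hand the
# Hopfield net an initial +/-1 state.  (Hypothetical weights.)
filter_weights = rng.normal(size=(16, 64))

def front_end(image):
    return np.where(filter_weights @ image >= 0, 1.0, -1.0)

# --- settle, then read -----------------------------------------------------
def settle_and_read(image, max_sweeps=20):
    state = front_end(image)
    for _ in range(max_sweeps):
        previous = state.copy()
        for i in range(len(state)):
            state[i] = 1.0 if W[i] @ state >= 0 else -1.0
        if np.array_equal(state, previous):   # net has stopped changing:
            break                             # "release the inhibition"
    return state                              # hand off to the next network

image = rng.normal(size=64)
print(settle_and_read(image))   # typically settles onto the stored pattern
                                # or its inverse
```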
Now, regarding human predictions -- I dunno. I don't think we know quite enough about the hippocampus (or hypothalamus, whichever it is) to figure out the number of neurons available, and we certainly don't know how "close" to a Hopfield net arrangement they are, if at all.
I also don't know if it is possible to perform the kind of experiments you referenced and have the results be meaningful in this context, because we don't know whether human memory is a single network, a recurrent network of subnetworks, a parallel set of networks, etc. My opinion is that trying to come up with a simple model and check the math by performing psychological experiments where the subject is tested on recall isn't going to be very illuminating, because it necessarily glosses over a huge amount of complexity -- in other words, it is like modelling the space shuttle using the simple rocket-exhaust momentum equation we learn about in first-year physics.
But there are predictions that don't involve "math" per se. For example, we know that it is very difficult to recall anything to do with "time" using a Hopfield net. To do it would require a whole bunch of extra networks, and it would be pretty crazy. And do humans do well with time periods? No, not at all -- humans recall events, and the order of events. The duration of the events is not part of our recall. For example, a human will find it difficult to say whether a given time period was 15 minutes or 25 minutes, only that the latter period was longer than the former, and even then the circumstances might lead to the human being wrong.
Other examples are what I hinted at in the initial Hopfield net post -- memories being just out of reach, memories being polluted by very similar memories, etc.
Like I said, I would really love to tinker with these things in some kind of game framework and start building ANNs that control virtual creatures. It would be a great test.