Out-of-order loading in a concurrent environment
Below is a snippet from Joe Duffy's book (Concurrent Programming on Windows), followed by the piece of code the paragraph relates to. That code is meant to work in a concurrent environment (used by many threads), where this LazyInit<T> class creates an object that is initialized only when the value (of type T) is actually needed.
I would appreciate it if someone could walk through, step by step, how an out-of-order load-to-load can create a problem. That is, how could two or more threads using this class, each loading the reference and then fields through it, run into trouble if one of the threads happened to load the fields first and the reference afterwards, rather than in the order we would expect (load the reference first, then load the fields' values through that reference)?
I understand that a failure due to out-of-order loading is pretty rare. I can see that a thread might incorrectly read the fields' values first, before knowing what the reference (pointer?) is, but if that happened, I would expect the thread to correct itself (just as it would in a non-concurrent environment) once it noticed that the premature load produced the wrong value; in that case the load would eventually succeed. In other words, how can the presence of another thread prevent the loading thread from 'realizing' that its out-of-order load is invalid?
I hope I managed to convey the problem as I really see it.
Snippet:
Because all of the processors mentioned above, in addition to the .NET memory model, allow load-to-load reordering in some circumstances, the load of m_value could move after the load of the object's fields. The effect would be similar and marking m_value as volatile prevents it. Marking the object's fields as volatile is not necessary because the read of the value is an acquire fence and prevents the subsequent loads from moving before, no matter whether they are volatile or not. This might seem ridiculous to some: how could a field be read before a reference to the object itself? This appears to violate data dependence, but it doesn't: some newer processors (like IA64) employ value speculation and will execute loads ahead of time. If the processor happens to guess the correct value of the reference and field as it was before the reference was written, the speculative read could retire and create a problem. This kind of reordering is quite rare and may never happen in practice, but nevertheless it is a problem.
Code example:
public class LazyInitOnlyOnceRef<T> where T : class
{
    private volatile T m_value;
    private object m_sync = new object();
    private Func<T> m_factory;

    public LazyInitOnlyOnceRef(Func<T> factory) { m_factory = factory; }

    public T Value
    {
        get
        {
            if (m_value == null)
            {
                lock (m_sync)
                {
                    if (m_value == null)
                        m_value = m_factory();
                }
            }
            return m_value;
        }
    }
}
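For concreteness, here is how I picture the class being used; MyObject, its SomeField member, and the little harness below are just names I made up for illustration. The reader thread performs exactly the two loads the snippet talks about: first the load of m_value inside the Value getter, then a load of a field through the returned reference.

using System;
using System.Threading;

// Hypothetical payload type, used only for illustration.
public class MyObject
{
    public int SomeField = 42;   // written by the factory/constructor before publication
}

public static class Demo
{
    private static readonly LazyInitOnlyOnceRef<MyObject> s_lazy =
        new LazyInitOnlyOnceRef<MyObject>(() => new MyObject());

    public static void Main()
    {
        var reader = new Thread(() =>
        {
            MyObject mo = s_lazy.Value;   // load #1: the read of m_value inside the getter
            int f = mo.SomeField;         // load #2: the field read through the returned reference
            Console.WriteLine(f);         // prints 42
        });
        reader.Start();
        reader.Join();
    }
}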
Some newer processors (like IA64) employ value speculation and will execute loads ahead of time. If the processor happens to guess the correct value of the reference and field as it was before the reference was written, the speculative read could retire and create a problem.
This essentially corresponds to the following source transformation:
var obj = this.m_value;
Console.WriteLine(obj.SomeField);
becomes
[ThreadStatic]
static object lastValueSeen = null; // processor cache: the last reference value seen
// ...
int someFieldValuePrefetched = lastValueSeen.SomeField; // prefetch speculatively
if (this.m_value == lastValueSeen) {
    // speculation succeeded (accelerated case): the speculated read is used
    Console.WriteLine(someFieldValuePrefetched);
}
else {
    // speculation failed (slow case): the speculated read is discarded
    var obj = this.m_value;
    lastValueSeen = obj; // remember the last value seen
    Console.WriteLine(obj.SomeField);
}
The processor tries to predict the next memory address that is going to be needed to warm the caches.
Essentially, you can no longer rely on data dependencies because a field can be loaded before the pointer to the containing object is known.
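For what it's worth, the acquire guarantee that the volatile read of m_value provides can also be spelled out with explicit calls. Below is a minimal sketch of such an alternative, not from the book, using System.Threading.Volatile (available since .NET 4.5); the class name LazyInitExplicitAcquire is mine.

using System;
using System.Threading;

public class LazyInitExplicitAcquire<T> where T : class
{
    private T m_value;                              // deliberately not marked volatile
    private readonly object m_sync = new object();
    private readonly Func<T> m_factory;

    public LazyInitExplicitAcquire(Func<T> factory) { m_factory = factory; }

    public T Value
    {
        get
        {
            // Volatile.Read is an acquire: later loads (such as a field read through
            // the returned reference) cannot be satisfied before it, which is what
            // rules out the reordered/speculated field load described above.
            T value = Volatile.Read(ref m_value);
            if (value == null)
            {
                lock (m_sync)
                {
                    value = m_value;                // reads inside the lock are ordered by the lock
                    if (value == null)
                    {
                        value = m_factory();
                        // Volatile.Write is a release: the object's fields become visible
                        // no later than the reference that publishes them.
                        Volatile.Write(ref m_value, value);
                    }
                }
            }
            return value;
        }
    }
}

Either way, the essential property is the same: the read that fetches the reference is an acquire, so the dependent field loads cannot retire ahead of it, even under value speculation.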
You ask:
So if (this.m_value == lastValueSeen) is really the statement by which the prediction (based on the value of m_value seen last time) is put to the test. I understand that in sequential (non-concurrent) programming the test must always fail for whatever value was last seen, but in concurrent programming that test (the prediction) could succeed, and the processor's flow of execution would then go on to print an invalid value (i.e., a null someFieldValuePrefetched).
My question is: how can this false prediction succeed only in concurrent programming but not in sequential, non-concurrent programming? And, in connection with that, when the processor accepts this false prediction in concurrent code, what are the possible values of m_value (i.e., must it be null, or non-null)?
Whether the speculation works out or not does not depend on threading, but on whether this.m_value is often the same value as it was on the last execution. If it changes rarely, the speculation often succeeds.
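To make that concrete: in a single-threaded program the simulated speculation can take the fast path, but whenever it does, the retired value is necessarily what a plain dependent read would have produced, because nothing can touch the field or the reference between the prefetch and the compare. Only a concurrent writer can change something in that window. A minimal single-threaded sketch (the Payload type and all the names are mine, purely for illustration):

using System;

public class Payload { public int SomeField; }

public static class SequentialSpeculationDemo
{
    private static Payload s_value = new Payload { SomeField = 7 };
    private static Payload s_lastValueSeen;                  // models the processor's guess

    public static int ReadWithSimulatedSpeculation()
    {
        // "Prefetch" through the guessed reference, if there is one.
        if (s_lastValueSeen != null)
        {
            int prefetched = s_lastValueSeen.SomeField;      // speculative load
            if (ReferenceEquals(s_value, s_lastValueSeen))   // guess verified
                return prefetched;                           // retire the speculative load
        }
        // Slow path: ordinary dependent loads.
        Payload obj = s_value;
        s_lastValueSeen = obj;
        return obj.SomeField;
    }

    public static void Main()
    {
        // With a single thread, every retired speculative read equals the plain read:
        // nothing can modify SomeField or s_value between the prefetch and the compare.
        Console.WriteLine(ReadWithSimulatedSpeculation());   // 7 (slow path, trains the guess)
        Console.WriteLine(ReadWithSimulatedSpeculation());   // 7 (fast path, still correct)
        s_value.SomeField = 8;
        Console.WriteLine(ReadWithSimulatedSpeculation());   // 8 (fast path, still correct)
    }
}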
First, I must say that I really appreciate your help in this matter. In order to hone my understanding, here is how I see it; please correct me if I am wrong.
If thread T1 were to execute the incorrect speculative-load path, the following lines of code would be executed:
Thread T1 line 1: int someFieldValuePrefetched = lastValueSeen.SomeField; // prefetch speculatively
Thread T1 line 2: if (this.m_value == lastValueSeen) {
                      // speculation succeeded (accelerated case): the speculated read is used
Thread T1 line 3:     Console.WriteLine(someFieldValuePrefetched);
                  }
                  else {
                      // speculation failed (slow case): the speculated read is discarded
                      …
                  }
On the other hand, thread T2 will need to execute the following lines of code.
Thread T2 line 1: old = m_value;
Thread T2 line 2: m_value = new object();
Thread T2 line 3: old.SomeField = 1;
My first question is: what is the value of this.m_value when “Thread T1 line 1” is executed? I suppose it's equal to the old m_value from before “Thread T2 line 2” was executed, correct? Otherwise, the speculative branch would NOT have picked the accelerated path. That leads me to ask whether thread T2 MUST also execute its lines of code out of order. That is, does it execute “Thread T2 line 1”, “Thread T2 line 3”, “Thread T2 line 2” rather than “Thread T2 line 1”, “Thread T2 line 2”, “Thread T2 line 3”? If so, then I believe the volatile keyword also prevents thread T2 from executing its code out of order, correct?
I can see that if thread T1's “Thread T1 line 2” were to execute after thread T2's “Thread T2 line 1” and “Thread T2 line 3”, but before “Thread T2 line 2”, then SomeField in thread T1 would be 1, even though that would not make sense, as you noted, because when SomeField becomes 1, m_value is assigned a new object whose SomeField is 0.
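To check my own reading, here is a deterministic software sketch, with no real threads, that simply performs the steps in the order I described, with “Thread T1 line 2” running after “Thread T2 line 1” and “Thread T2 line 3” but before “Thread T2 line 2”. The Payload type and the variable names are mine, for illustration only.

using System;

public class Payload { public int SomeField; }

public static class InterleavingSketch
{
    public static void Main()
    {
        // Shared state as in the discussion above: m_value currently points at "old",
        // whose SomeField is 0, and T1's processor has already seen that reference once.
        var old = new Payload { SomeField = 0 };
        Payload m_value = old;
        Payload lastValueSeen = old;                    // T1's remembered guess

        // T1 line 1: speculative prefetch through the guessed reference.
        int prefetched = lastValueSeen.SomeField;       // reads 0

        // T2 line 1 and T2 line 3 (T2 line 2 has not executed yet in this ordering).
        Payload t2_old = m_value;
        t2_old.SomeField = 1;

        // T1 line 2: the guess is verified against m_value, which is still "old".
        if (ReferenceEquals(m_value, lastValueSeen))
        {
            // T1 line 3: the speculative load retires, printing 0 even though a
            // program-order read of m_value.SomeField at this point would return 1.
            Console.WriteLine(prefetched);              // prints 0
        }

        // T2 line 2: only now is the new object published.
        m_value = new Payload();
    }
}

In this ordering the compare still succeeds because m_value has not been replaced yet, so the prefetched 0 retires even though SomeField is already 1 at that point.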
If this is still relevant, consider the following code; it's from CPOW by Joe Duffy:
MyObject mo = new LazyInit<MyObject>(someFactory).Value;
int f = mo.field;
if (f == 0)
{
    // Do Something...
    Console.WriteLine(f);
}
The following text is also from the book: "If the period of time between the initial read of mo.field into variable f and the subsequent use of f in the Console.WriteLine was long enough, a compiler may decide it would be more efficient to reread mo.field twice.... compiler might decide this if keeping the value would create register pressure, lead to less efficient stack space usage":
...
if (mo.field == 0)
{
    // Do Something...
    Console.WriteLine(mo.field);
}
So I think it might be a good example of a retired speculative read: by the time of the subsequent use of mo.field, the speculative read of mo could retire and create a null reference exception, which is definitely a problem.
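Here is a small sketch of the reread hazard that passage describes. MyObject below is a stand-in type I defined myself, and the writer thread plus the Start/Join placement inside the if are artificial, only there to make the interleaving deterministic; the point is that once the compiler turns the single read into two reads of mo.field, a concurrent write between them makes the value that was tested and the value that is printed disagree.

using System;
using System.Threading;

// Hypothetical type standing in for the book's MyObject.
public class MyObject { public int field; }

public static class RereadSketch
{
    public static void Main()
    {
        var mo = new MyObject();                  // field == 0

        // A concurrent writer, standing in for whatever else mutates the object.
        var writer = new Thread(() => mo.field = 5);

        // What the compiler is allowed to produce after eliminating the local f:
        if (mo.field == 0)                        // first read: sees 0
        {
            writer.Start();
            writer.Join();                        // forces the write between the two reads
            Console.WriteLine(mo.field);          // second read: prints 5, not the 0 we tested
        }
    }
}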