What will actually happen, at least initially, is that this thread is awoken each time any variable in any of the threadLock objects is modified and the condition is retested. This may be inefficient. When the condition evaluates to T, the call returns and the lock is held.
If the argument is a string, it is taken to be the name of a variable in this threadLock object. The thread requesting the lock is notified of any assignment to this variable, whether the variable already exists or is being created. This lets a thread arrange to be notified of any changes to a particular variable.
If the argument is a vector of character strings, the above applies if any of the variables is modified.
We should think about signalling removals, but this is an extreme case.
The purpose of this is to test the values of variables that are shared across threads. The test must be protected to ensure that no thread modifies the variables during the evaluation of the expression, and that the variables have consistent values.
This is different from Pthreads, where the thread must explicitly call pthread_mutex_lock() and pthread_mutex_unlock() and place the call to pthread_cond_wait() inside the locked region. So the code here is asymmetric by comparison.
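To make the comparison concrete, here is a sketch in C of the Pthreads pattern that a condition-style getLock() would encapsulate. The function names (wait_for_condition, assign_i) and the particular shared variables are assumptions for illustration, not part of any proposed implementation.

```c
/* Sketch of the explicit Pthreads pattern that getLock(condition = ...) would
   hide: lock the mutex, retest the condition after every wake-up, and return
   with the lock still held once the condition is true. */
#include <pthread.h>

/* shared state, protected by the mutex */
static int n = 1, i = 1;
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t changed = PTHREAD_COND_INITIALIZER;

/* Wait until the condition holds; returns with the mutex held. */
void wait_for_condition(void)
{
    pthread_mutex_lock(&mutex);
    while (!(i == 0 && n == 1))            /* retest on every wake-up */
        pthread_cond_wait(&changed, &mutex);
    /* condition is true and the lock is held, as getLock() promises */
}

/* Every assignment to a shared variable wakes the waiters so they can retest. */
void assign_i(int value)
{
    pthread_mutex_lock(&mutex);
    i = value;
    pthread_cond_broadcast(&changed);
    pthread_mutex_unlock(&mutex);
}
```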
For example, suppose the variables live in two different threadLock objects and we want to test a condition that involves both:

a = threadLock("a", n = 1)
b = threadLock("b", i = 1)
i == 0 && n == 1
Since these variables live in two different threadLock objects, we might have problems even though each individual access is synchronized.
Imagine the scenario where i is retrieved and a copy returned. Then a second thread modifies i. Then n is retrieved and the condition is evaluated. At this stage we do not have consistency, and the condition may well be evaluated erroneously.
Instead, we must lock access to i and n. So
we lock a and b. Now this may cause grief.
So we would have
getLock(a); getLock(b)
i == 0 && n == 1
yieldLock(b); yieldLock(a)
(Note the order in which we acquire and release the locks: releases are in the reverse of the acquisition order, which is good practice in general.)
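A minimal C sketch of this two-lock discipline, assuming one mutex per threadLock object; the comments map each call back to the getLock()/yieldLock() calls above.

```c
/* Every thread acquires the locks in the same order (a before b) and releases
   them in reverse, so the values of n and i are read consistently. */
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;  /* protects n */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;  /* protects i */
static int n = 1, i = 1;

bool test_condition(void)
{
    bool ok;
    pthread_mutex_lock(&lock_a);      /* getLock(a)                     */
    pthread_mutex_lock(&lock_b);      /* getLock(b)                     */
    ok = (i == 0 && n == 1);          /* both values are now consistent */
    pthread_mutex_unlock(&lock_b);    /* yieldLock(b)                   */
    pthread_mutex_unlock(&lock_a);    /* yieldLock(a)                   */
    return ok;
}
```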
Consider the case where one (or more) of these conditions is false, say i == 1.
Then the condition evaluates to F.
The best way to arrange these computations is to have n and i together in a single threadLock object. Of course, if we have numerous conditions to test, each comparing partially intersecting sets of variables (e.g. {a,b}, {b,c}, {c,d}, {a,b}), then this threadLock object becomes quite large, and all assignments to and gets from it go through one coarse lock. This imposes a performance penalty on the parallelism of the threads and can become complicated.
Alternatively, we could have

getLock(..., condition, safe = F)

where the ... argument is for an arbitrary collection of threadLock objects. Then we get the lock on all of these, and when the condition is false, we release them all and iterate.
Now, in this scenario, we would code things as
getLock(a,b, condition = Quote(i == 0 && n == 1))
Now if i == 1, this is false, and so we release the locks on both a and b. Now another thread modifies n, say setting it to 3. Then we grab the locks on a and b, evaluate the expression again, and again find it to be false. Now suppose the other thread sets n back to 1 and "simultaneously" sets i to 0. Of course, to do this the other thread had to acquire the locks on both a and b; if the condition-testing thread was waiting, it might still be waiting. Eventually, suppose it gets the locks and tests the condition. Now it is true, so the call returns.
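Here is a simplified C emulation of the acquire-all/test/release-all/iterate behavior described for getLock(..., condition, safe=F). The function name get_locks_when() is hypothetical, and the short sleep merely stands in for being woken when one of the variables is modified.

```c
/* Grab every lock, test the condition, and if it is false release them all
   and try again. The proposal would block until one of the variables is
   modified; here a short sleep stands in for that notification. */
#include <pthread.h>
#include <stdbool.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;  /* protects n */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;  /* protects i */
static int n = 1, i = 1;

/* Returns with both locks held and the condition true. */
void get_locks_when(bool (*condition)(void))
{
    for (;;) {
        pthread_mutex_lock(&lock_a);       /* fixed acquisition order      */
        pthread_mutex_lock(&lock_b);
        if (condition())
            return;                        /* both locks are still held    */
        pthread_mutex_unlock(&lock_b);     /* condition false: release all */
        pthread_mutex_unlock(&lock_a);     /* and iterate                  */
        usleep(1000);                      /* stand-in for "wait until a
                                              variable is modified"        */
    }
}

static bool ready(void) { return i == 0 && n == 1; }
/* usage: get_locks_when(ready); ...; then unlock b, then a */
```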
This reeks of the same style as automatically synchronizing input and output variables. We can parse the conditions to see which variables are involved and put them in the relevant databases.
For example, one thread can signal that it has finished to another thread that is waiting on a condition over a shared threadLock variable:

| Thread 1 | Thread 2 |
|---|---|
| `# Do some computations for this thread.`<br>`..`<br>`..`<br>`threadLock.assign(lock, "finished", T)` | `getLock(lock, condition = Quote(finished == T))`<br>`# now wrap up the computations for this thread` |
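For comparison, here is a self-contained C analogue of the handshake in the table, using a mutex and condition variable in place of the proposed threadLock object; the structure (set the flag, broadcast, wait in a loop) is standard Pthreads practice, not the package's implementation.

```c
/* One thread finishes its computations and sets a shared `finished` flag
   (threadLock.assign in the proposal); the other blocks until the flag is
   true (getLock with condition finished == T) and then wraps up. */
#include <pthread.h>
#include <stdio.h>

static int finished = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t changed = PTHREAD_COND_INITIALIZER;

static void *worker(void *arg)
{
    /* ... do some computations for this thread ... */
    pthread_mutex_lock(&lock);
    finished = 1;                      /* threadLock.assign(lock, "finished", T) */
    pthread_cond_broadcast(&changed);  /* wake any thread testing a condition    */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);

    pthread_mutex_lock(&lock);         /* getLock(lock, condition = ...) */
    while (!finished)
        pthread_cond_wait(&changed, &lock);
    /* now wrap up the computations for this thread */
    pthread_mutex_unlock(&lock);

    pthread_join(tid, NULL);
    printf("worker finished\n");
    return 0;
}
```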