On Tue, 2011-08-16 at 11:38 +0200, Lukas Zeller wrote:
> On Aug 16, 2011, at 9:06, Patrick Ohly wrote:
>>> On Aug 3, 2011, at 16:31, Patrick Ohly wrote:
>>>> [...] Is there some way to force UID comparison for added items even in a
>>>> normal, incremental sync?
>>> Something like a slow sync match on UIDs for adds, to convert them to updates?
>>> That's something that could be added to the engine if we add UID-based
>>> matching support natively (would make sense, but also quite some work
>>> to get it right).
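For illustration, such UID-based matching of adds could look roughly like this (Python sketch; all names are hypothetical, nothing here is actual engine API):

```python
# Sketch (hypothetical helper, not engine API): match an incoming add
# against the local sync set by iCalendar UID and convert it into an
# update of the existing item instead of creating a duplicate.

def match_add_by_uid(incoming, local_items):
    """Return ("update", local_id) on a UID match, else ("add", None)."""
    uid = incoming.get("UID")
    if uid:
        for local_id, item in local_items.items():
            if item.get("UID") == uid:
                return ("update", local_id)
    return ("add", None)
```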
>> Would it be simpler if it was done without native UID-based matching?
>> Perhaps by specifying that a comparescript also needs to be run for
>> incoming adds?
> It would be a run through the entire sync set, which is never available
> in a searchable form on the client side (and I don't have a reasonably
> quick and safe idea how to change that). On the server it is only
> loaded in slow syncs - during normal syncs only changes are loaded. Of
> course that could be changed, but I doubt it's worth the effort,
> because getting it right without breaking some of the more complicated
> features like filtering will be hard work...
>>> However I once had a similar case, where a backend would do an
>>> implicit merge instead of an add based on the contents (wasn't the
>>> UID, but that doesn't matter) of the to-be-added entry.
>>> For that we added the possibility for the plugin to return
>>> DB_DataMerged (207) when adding the item caused some sort of merge in
>>> the backend.
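A backend along those lines might behave like this sketch (only the 207 value is from the discussion above; the rest is made up for illustration):

```python
# Sketch of a backend whose add detects a duplicate and reports the
# DB_DataMerged (207) status mentioned above; class and status names
# other than the 207 value itself are invented for this example.
DB_OK = 200           # hypothetical "plain add succeeded" status
DB_DATA_MERGED = 207  # add caused a merge in the backend

class Backend:
    def __init__(self):
        self.items = {}    # local ID -> item fields
        self._next = 0

    def insert_item(self, item):
        for local_id, existing in self.items.items():
            if existing.get("UID") == item.get("UID"):
                # Crude field-level merge; a real merge would be smarter.
                existing.update(item)
                return DB_DATA_MERGED, local_id
        self._next += 1
        local_id = "id%d" % self._next
        self.items[local_id] = dict(item)
        return DB_OK, local_id
```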
>> SyncEvolution already does that. But because it can't do a real merge of
>> the data, some information is lost.
> Why can't it do a real merge?
Merely for the practical reasons that you mentioned - there's no code
which does it. It could be implemented, but right now SyncEvolution is
not capable of such merging. It would duplicate functionality of the
Synthesis engine, so we really should get libvxxx ready for such use.
>>> This causes the engine to read the item back from the backend after
>>> having added it, assuming to get back the merged data, which is then
>>> sent back to the client in the same session.
>>> This only works server-side, because on the client there is no way to
>>> send updates after having received server updates (must wait for the
>>> next session).
>>> Your backend would need to issue a delete for the other item in a later
>>> session to clean up.
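The server-side read-back flow described above can be sketched like this (the tiny in-memory backend and all names here are purely illustrative):

```python
# Sketch of the server-side flow: on a 207 the engine reads the merged
# item back and queues it as an update for the client in the same
# session. Everything here is an illustrative stand-in, not real API.
DB_OK, DB_DATA_MERGED = 200, 207

class MiniBackend:
    def __init__(self, items):
        self.items = items  # local ID -> item fields

    def insert_item(self, item):
        for lid, existing in self.items.items():
            if existing.get("UID") == item.get("UID"):
                existing.update(item)   # crude merge for illustration
                return DB_DATA_MERGED, lid
        lid = "id%d" % (len(self.items) + 1)
        self.items[lid] = dict(item)
        return DB_OK, lid

    def read_item(self, lid):
        return dict(self.items[lid])

def process_client_add(backend, incoming, updates_for_client):
    status, lid = backend.insert_item(incoming)
    if status == DB_DATA_MERGED:
        # Read back the merge result and queue it for the client.
        updates_for_client.append((lid, backend.read_item(lid)))
    return status, lid
```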
>> SyncEvolution is not doing that. Can you elaborate?
>> What happens is this, from the backend's perspective:
>> * Item with local ID FOO exists.
>> * Backend asked to add new item.
>> * Backend detects that the new item is the same as the existing
>>   item, returns 207 and local ID FOO.
>> * In the next sync, the item with ID FOO is reported as
>>   "unchanged". Nothing is said about the other item because as far
>>   as the backend is concerned, it doesn't exist and never has.
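That "next sync" report can be sketched as follows, assuming (as in the scenario above) that the merge did not actually change FOO's data (function and snapshot format are invented for illustration):

```python
# Sketch of the backend's next-sync change report: only FOO exists, so
# the merged-away item cannot even be mentioned.
def report_changes(current, last_synced):
    """Compare snapshots (local ID -> item fields) and classify each item."""
    report = {}
    for lid in current:
        if lid not in last_synced:
            report[lid] = "new"
        elif current[lid] != last_synced[lid]:
            report[lid] = "updated"
        else:
            report[lid] = "unchanged"
    for lid in last_synced:
        if lid not in current:
            report[lid] = "deleted"
    return report
```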
> But yes, when the sync started, the backend should have reported the
> original item as a new one (before the merge occurred).
Not necessarily as new. In most cases it will be unchanged. Only in the
case of the add<->add conflict will it be new. The engine will know
about it one way or the other, though.
> So for the engine, the situation would be that
> * item with localID FOO exists
> * item with remoteID BAR is coming in as an add
> * backend merges it with existing item and returns localID FOO with status 207
Agreed. This is what happens right now already. What I don't understand
is what the backend should be doing differently. You said "need to issue
a delete for the other item" - which item? The backend only knows about
"FOO", which continues to exist. It is never passed the "BAR" remote ID.
> Now in the server case, the merge occurs BEFORE changes are sent to
> the client. So the server should probably
> * read back the item with localID FOO to get the result of
>   the merge (that's what actually happens already)
> * search the list of items to-be-sent to the client for
>   another item with the same localID. That would be the
>   item reported as an add to the engine BEFORE the merge
Only if the item really was new locally.
> * if one is found, and it is an add, remove that item from
>   the to-be-sent list to avoid ever creating a duplicate
>   at the client's end
> * if one is found, and it is a replace for an already mapped
>   remoteID, convert it to a delete. After all, it could
>   well be that the sequence of events is:
>   * invitation added to server
>   * sync -> new item gets added to client
>   * invitation added to client (dumb client, not detecting
>     the duplicate)
>   * sync -> client side version gets added to server,
>     detected as duplicate.
> All but the first bullet point are not implemented so far.
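The two unimplemented cleanup steps could be sketched roughly like this (the change-list format and names are made up; this only illustrates "drop the pending add, convert the pending replace to a delete"):

```python
# Sketch: after a merge onto localID FOO, fix up the list of changes
# still to be sent to the client. Purely illustrative data model.
def fixup_pending_changes(pending, merged_local_id):
    """pending: list of {"op": "add" | "replace", "localID": ...} dicts."""
    fixed = []
    for change in pending:
        if change["localID"] == merged_local_id:
            if change["op"] == "add":
                continue  # drop: sending it would duplicate the item on the client
            if change["op"] == "replace":
                # The client's mapped copy is redundant after the merge.
                change = dict(change, op="delete")
        fixed.append(change)
    return fixed
```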
I had to resolve the double negation before this sentence made sense to
me ;-) So the "read back" bullet item is implemented, the rest isn't.
>>> Overall, I'm not sure where these types of cross-item merges fit
>>> conceptually. I guess with series and exceptions such a merge could
>>> easily get more complicated and involve multiple items, so I wonder if
>>> that should better be implemented outside the SyncML scope, like a
>>> "robot user" who is editing the database between syncs on one end or
>>> the other.
>> I think it fits into the engine. The Funambol server has worked in this
>> mode (always do mandatory duplicate checking on adds) for a long time. I
>> have my doubts whether it is always the right choice (for example, one
>> cannot add "similar" contacts that the server deems identical), but for
>> iCalendar 2.0 UID it would be.
> I see that the actions to be taken after a backend-detected merge make
> sense to be added to the engine, as outlined above.
> However, actually detecting and merging a duplicate belongs in the
> backend, as the search usually must extend beyond what the engine sees
> as the "current sync set" (imagine a date-range filtered sync, and an
> invitation added on both sides which is too far in the future to be in
> the date range window. The candidate for a merge could not be found in
> the sync set!).
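That filtering pitfall can be shown in a few lines (illustrative names only): a search restricted to the date-range sync set misses an out-of-window twin that a full-store search finds.

```python
# Sketch: the duplicate search must look beyond the filtered sync set,
# or the twin of an incoming invitation outside the window is missed.
import datetime

def find_duplicate(store, incoming, window_start, window_end, whole_store):
    """Search either the full store or only the date-range sync set."""
    for event in store:
        in_window = window_start <= event["DTSTART"] <= window_end
        if not whole_store and not in_window:
            continue  # a sync-set-only search never sees this event
        if event["UID"] == incoming["UID"]:
            return event
    return None
```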
Best Regards, Patrick Ohly
The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.