Sorry for "vacation mode" and not keeping up with my inbox these days...
On Aug 3, 2011, at 16:31 , Patrick Ohly wrote:
> Suppose the same meeting invitation for event UID=FOO is processed by
> both Evolution and Google Calendar. On the Evolution side, the
> invitation is accepted. In Google Calendar it is still open.
> Both sides now have a new item when syncing takes place.
> What happens is that the engine itself doesn't recognize that the two
> new items are in fact the same. It asks both sides to store the other's
> item. The SyncEvolution EDS backend recognizes the UID and updates the
> item, but without merging. The PARTSTAT=ACCEPTED is overwritten with
> Google's PARTSTAT=NEEDS-ACTION. At the same time, Google's version of
> the item is similarly overwritten.
> After modifying the event series in Evolution, it is sent as an update to
> Google, at which point the correct PARTSTAT is lost everywhere.
> Is there some way to force UID comparison for added items even in a
> normal, incremental sync?
> Something like a slow-sync match on UIDs for adds, to convert them to updates?
That's something that could be added to the engine if we add UID-based matching
support natively (would make sense, but also quite some work to get it right).
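Just to illustrate the idea (a minimal Python sketch, purely hypothetical; the engine is of course not structured like this): UID-based matching would mean that an incoming "add" whose UID already exists locally gets converted into an update, ideally merging fields rather than blindly overwriting them.

```python
# Hypothetical sketch of UID-based add-to-update conversion; the data
# model (dicts keyed by UID) is invented for the example.
def apply_incoming_add(local_items, incoming):
    """local_items: dict mapping UID -> item dict; incoming: item dict
    with a 'uid' key. Returns 'added' or 'updated'."""
    uid = incoming["uid"]
    if uid in local_items:
        # Same UID already present locally: treat the add as an update,
        # merging instead of overwriting (skip unset incoming fields so
        # e.g. a local PARTSTAT=ACCEPTED is not clobbered).
        merged = dict(local_items[uid])
        merged.update({k: v for k, v in incoming.items() if v is not None})
        local_items[uid] = merged
        return "updated"
    local_items[uid] = dict(incoming)
    return "added"
```

The tricky part that makes this "quite some work to get right" is of course the merge policy itself, not the UID lookup.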
However, I once had a similar case, where a backend would do an implicit merge instead of
an add based on the contents of the item (it wasn't the UID, but that doesn't matter).
For that we added the possibility for the plugin to return DB_DataMerged (207) when adding
the item caused some sort of merge in the backend. This causes the engine to read the item
back from the backend after having added it, assuming it gets back the merged data, which
is then sent back to the client in the same session.
This only works server-side, because on the client there is no way to send updates after
having received server updates (must wait for the next session).
Your backend would need to issue a delete for the other item in a later session to clean up.
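The server-side 207 flow described above could be sketched roughly like this (Python, with invented names; this is not the real Synthesis engine API, just the shape of the mechanism):

```python
# Hypothetical sketch of the DB_DataMerged (207) handling: an add that
# collapses into an existing item makes the engine read the merged data
# back and return it to the client in the same session.
DB_OK = 200
DB_DATA_MERGED = 207

class InMemoryBackend:
    """Toy calendar store keyed by UID; adding an existing UID merges."""
    def __init__(self, items=None):
        self.store = dict(items or {})

    def add_item(self, item):
        uid = item["uid"]
        if uid in self.store:
            # Implicit merge: keep existing fields, adopt new ones.
            for key, value in item.items():
                self.store[uid].setdefault(key, value)
            return DB_DATA_MERGED, uid
        self.store[uid] = dict(item)
        return DB_OK, uid

    def read_item(self, local_id):
        return self.store[local_id]

def server_handle_client_add(backend, item, replies_to_client):
    """Add a client item; on 207, read the merged result back and queue
    it as a replace for the client within the same session."""
    status, local_id = backend.add_item(item)
    if status == DB_DATA_MERGED:
        merged = backend.read_item(local_id)
        replies_to_client.append(("replace", merged))
    return status
```

A client cannot do the equivalent of `replies_to_client` within the same session, which is exactly why this only works server-side.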
Overall, I'm not sure where these types of cross-item merges belong conceptually. I
guess with series and exceptions such a merge could easily get more complicated and
involve multiple items, so I wonder whether this would be better implemented outside the
SyncML scope, like a "robot user" who is editing the database between syncs on
one end or the other.
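Such a "robot user" could be as simple as an out-of-band pass that runs between syncs and consolidates duplicates sharing a UID. A toy Python sketch (field names and the merge policy are invented for the example):

```python
# Hypothetical "robot user" consolidation pass: collapse events that
# share a UID into one, preferring a definite participation status
# over NEEDS-ACTION.
def consolidate_duplicates(events):
    """events: list of dicts with a 'uid' key.
    Returns (deduplicated_list, number_of_items_removed)."""
    by_uid = {}
    removed = 0
    for event in events:
        uid = event["uid"]
        if uid not in by_uid:
            by_uid[uid] = dict(event)
        else:
            removed += 1
            # Invented merge rule: keep the more definite PARTSTAT.
            if by_uid[uid].get("partstat") == "NEEDS-ACTION" and event.get("partstat"):
                by_uid[uid]["partstat"] = event["partstat"]
    return list(by_uid.values()), removed
```

A real robot would of course also have to handle RECURRENCE-ID exceptions, which is where the multi-item complexity comes in.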
Because only a few cases can be folded into a single sync, many scenarios will require a
second sync to settle anyway. So maybe smarter SyncML peers could auto-initiate a second
sync immediately when detecting 207 status codes in a session, such that for end users the
consolidation appears to happen in one step. Even unaware peers would eventually settle
correctly, just not immediately.
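The "auto-initiate a second sync" idea amounts to a small loop on the client (Python sketch; `run_sync` is a placeholder for the real sync invocation, and the round limit is an arbitrary safeguard):

```python
# Hypothetical sketch: re-run the sync automatically whenever the
# previous session reported any 207 (data merged) statuses, so the
# consolidation looks like a single step to the user.
DB_DATA_MERGED = 207

def sync_until_settled(run_sync, max_rounds=3):
    """run_sync() performs one sync session and returns the list of
    status codes seen in it. Returns the number of rounds executed."""
    rounds = 0
    while rounds < max_rounds:
        statuses = run_sync()
        rounds += 1
        if DB_DATA_MERGED not in statuses:
            break
    return rounds
```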
Just my (late night) thoughts...