Draft Minutes, Remote UI at IETF 63

Edited by Dean Willis from notes by Randall Gellens


Chair called to order with IPR notice and reminder to use new draft boilerplate and idnits tool.

Agenda reviewed and approved as posted.

Topic: Possible scope and deliverables, discussion led by chairs.
Slides presented

Question: does "proposed deliverables" slide suggest we've already decided what to do?
Dean: no, it suggests what we might do if we decide to do something.

Vlad: "remote user interface" may no longer be an appropriate name; we have narrowed our scope.


Topic: W3C work on related topics, led by Dave Raggett
Slides presented.

- web limitations (visual, mouse, limited devices, etc.)
- overview of web (resources, languages/formats, protocols, etc.)
- separation of presentation from views, data model from ???, etc.
- the DOM: platform and language neutral APIs for manipulating XML documents with a rich model of events
- DOM enhancements (user prefs, device capabilities, configuration,  environmental conditions, access to remote services)
- layering APIs on protocols (DOM property w/ GET/SET methods)
- propagation of local events (e.g., device out of paper)
- speech, keypad, or pen for user input (cars)
- variety of output options
- dialog between user and application (expressed via markup,  client/server scripts)
- who is in control (user, wizards)
- xml data exchange between input processors and interaction mgmt systems
- processors annotate application specific data with confidence scores, timestamps, input/output modes, partial results, etc.
- no standards-based solutions yet; contributions on SALT and X+V
- W3C MMI WG developing architecture
- other protocols possible
- modality components communicate via asynch msgs (DOM may hide protocol)
- distributed DOM events (events signaling changes to user prefs, device config/capabilities, environmental conditions, shared data model)
- propagate changes to shared data model
- synchronize local copies of remote properties (ink levels in network printer, geo loc of device)
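As a loose illustration of the "distributed DOM events" idea above (the message format and function names here are invented for this sketch, not taken from any W3C or draft specification), a property change on a remote device could be serialized as an asynchronous message and applied to a local mirror of the shared data model:

```python
import json

# Hypothetical message for a distributed DOM-style property-change event.
def property_changed(node_id, prop, value):
    """Serialize a change to a shared data model as an async message."""
    return json.dumps({"event": "property-changed",
                       "node": node_id, "property": prop, "value": value})

def apply_event(local_model, message):
    """Update the local copy of the remote property from the event."""
    evt = json.loads(message)
    local_model.setdefault(evt["node"], {})[evt["property"]] = evt["value"]
    return local_model

# A network printer reports its ink level; the device mirrors it locally.
model = {}
apply_event(model, property_changed("printer-1", "ink-level", 12))
```

The point of the sketch is only that synchronization reduces to a stream of small, self-describing change events; the actual protocol and event vocabulary were exactly what the group was discussing.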

Conclusion: W3C should do markup and DOM; IETF should do the remote protocol


Topic: Proposed RemoteUI solution, discussion led by Vlad Stirbu
Slides presented

Remote UI is intended to be a mechanism that allows a user interface to be rendered
on a separate device from the application logic.
- compare to alternatives (framebuffer [VNC, RDP, Hot Desk],  graphics-level [x-windows])
- problem with these: UI is slave to logic; can't adapt to device or user or environment
- widely diverse device characteristics

UI descriptions (W3C markup languages; discovery session setup [mmusic])
- WiDeX Goal & Scope (specify an open platform-independent method for use in an IP network for user-visible objects)
- WiDeX requirements and assumptions

Question: what does "discovery and session setup independent" mean?
- Vlad: need to be able to use multiple mechanisms, e.g., SDP, zero-conf
- Dean: KPML uses SIP-session setup for discovery, then SIP to carry  events, but Remote UI may want to use other protocols and it might be desirable to use SIP for everything.  (Ed. note: see draft-stirbu-ordp-00)

MVC* Architecture overview
- MVC elements (model, view, controller)
- Remoting UI Concept
- MVC Architecture Over the Network
- WiDeX Framework Overview
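To make the "MVC over the network" slides concrete, here is a minimal, hypothetical sketch (class and method names are invented for illustration; this is not the WiDeX design): the model stays with the application logic, the view keeps a local mirror that is updated by pushed change notifications, and controller events from the UI flow back to mutate the model:

```python
# Illustrative only: MVC split across the network.
class Model:
    def __init__(self):
        self.state = {"counter": 0}
        self.listeners = []          # remote views subscribed to changes

    def update(self, key, value):
        self.state[key] = value
        for notify in self.listeners:
            notify(key, value)       # push the change to each remote view

class RemoteView:
    def __init__(self, model):
        self.mirror = dict(model.state)    # local copy of the model
        self.model = model
        model.listeners.append(self.on_change)

    def on_change(self, key, value):       # "view" side of synchronization
        self.mirror[key] = value

    def click(self):                       # "controller" event from the UI
        self.model.update("counter", self.mirror["counter"] + 1)

model = Model()
view = RemoteView(model)
view.click()   # controller event round-trips through the model
```

In the real framework the listener callback would be a network message rather than a function call, which is where the over-the-wire protocol the group is scoping would sit.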

Question: this looks like specifying ui behavior, not over-the-wire protocol
- Vlad: no, scope is limited to protocol; UI elements and behavior out of scope
- Comment: this gives context to protocol since IETF normally doesn't do UI
- Dean: need UI that can be described semantically; need to be able to  split into model/view/controller; need to be able to synchronize  events

Comment: document looks like it focuses on UI; should rename document "how  to keep DOMs synchronized"

Comment: use of "client" and "server" is confusing, doesn't look like x-windows terms
- Dean: no, this is web-server terms

Comment: saying session setup is out of scope is not right
- Vlad: maybe so

Comment: scope is keeping state synchronized, after session has been setup
- Vlad: you are doing synchronization, but you don't care how UI is represented

Comment: "keeping ui in sync" keeps being said; better to focus on keeping  DOM in sync

Chair: We seem to have a scoping proposal to focus on keeping DOMs in sync.

Comment: session setup happens before synchronization, may need separate protocol; my web page says "here is the URL for my pvi"; you can click, but you don't know what widgets will be sent
-  protocol may not care what widgets are sent

Comment: W3C will provide framework for (Ed: missed here, but presumably "widget definition")

Comment: Vlad was talking about need for protocol that can be used for  several things; W3C folks want to aim for DOM; to do something general purpose you need at least one specific thing it is good for,  otherwise it is likely good for nothing; WG should have specific goal  to make DOM object synchronization work, but it should not make a  protocol that is limited to this.

Comment: negotiation-then-transport is like RTP (SDP then payload formats);  important that this be specified someplace

Comment: session startup discussion is confusing; best guess is that session initiation is a sequence that eventually sends payload; URI reference is a way to start; at some point thing doing exchange and update has to start by negotiating; that phase has to take place as part of this protocol.

Comment: device characteristics versus large database are PFM as far as this group is concerned; can think of at least 5 mechanisms (SDP,  RTSP, etc.); we don't need yet another one

Dean: how does synchronizing two XML documents differ from SIMPLE presence distribution?
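One way to picture "keeping two XML documents in sync" (this sketch is invented for illustration and is not any protocol discussed at the meeting) is as a stream of small patch operations, each naming an element and its new content, so only deltas cross the wire rather than whole documents:

```python
import xml.etree.ElementTree as ET

# Illustrative only: synchronize a mirrored XML document by applying
# small patch operations (element path + new text).
def apply_patch(root, ops):
    for path, text in ops:
        elem = root.find(path)
        if elem is not None:
            elem.text = text

server = ET.fromstring("<ui><label>Hello</label><field>old</field></ui>")
client = ET.fromstring("<ui><label>Hello</label><field>old</field></ui>")

# Application logic changes the model; only the delta is sent and
# applied on both sides.
ops = [("field", "new")]
apply_patch(server, ops)
apply_patch(client, ops)
```

Seen this way, Dean's question is apt: SIMPLE presence distribution also pushes incremental updates to a mirrored XML document, so the overlap is worth examining.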

Question: unclear on intent
- Vlad: you are synchronizing two DOM objects; don't need to have stateful devices or applications; only application logic is maintaining state of system; everything is related to application logic; UI is representation of that

Comment (Crocker): I'm jet lagged; please raise hand if you believe you understand scope of work
- a number of people raised hands, perhaps 20% of room, questioner seems impressed.

Comment: I'm finishing my PhD in pretty much this, so yes; I'd suggest that a document that described how to use this with mobile phones with SIP  would be needed; this does need to be used for something; this group needs to explain how it works
- Vlad: SIP in Minneapolis concluded that SIP/SDP needs protocol to show what needs to be synchronized

Comment: I think I understand chewy center of scope; unclear where hard candy boundary is (limits of scope, only core of scope is clear);  which parts of service discovery get included; what are other ends of  protocols; the more general this gets the harder it is to understand;  pick something specific (e.g., XML documents) and keep that in focus;  otherwise scope gets out of hand

Comment: client need not be stateful -- if true, example is video; don't  need to sync back to server that you rec'd video; may have to sync when formats change so client and server are on same page; so client needs to be stateful
- Dean: video with streams is an important issue; because we are using UI descriptions to instantiate widgets that interact with the application. We could use embedded widgets to play video;  send events when video is completed.
- Comment: you just described a state machine
- Dean: Then I suppose clients may have state.

Comment: recommend not doing general solution that has one specific solution; instead do specific solution that can be used for other things

(back to slides)

Questions
 - MVC right for WiDeX?
 - more fine-grained UI exchange msgs?
 - BEEP right protocol?

Dean: this draft is so lightweight there's not really any protocol there.
Chris: like SASL

Comment: synchronizing client state with server has been done before; IMAP is one; ACAP is another that had a much simpler data model than XML;  caution since we've failed before.  BEEP sounds like slam dunk since  you're using XML and need multiple channels
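The appeal of BEEP's multiple channels can be sketched loosely (the framing here is simplified and invented for illustration; it is not actual BEEP framing): several independent XML exchanges share one connection by tagging each frame with a channel number and demultiplexing on receipt:

```python
# Loose illustration of channel multiplexing, BEEP-style.
def frame(channel, payload):
    """Tag a payload with its channel number and length."""
    return f"{channel} {len(payload)}\n{payload}"

def demux(frames):
    """Sort received frames back into per-channel payload streams."""
    channels = {}
    for f in frames:
        header, payload = f.split("\n", 1)
        ch = int(header.split()[0])
        channels.setdefault(ch, []).append(payload)
    return channels

# Sync traffic and event traffic interleave on one connection.
wire = [frame(1, "<sync/>"), frame(2, "<event/>"), frame(1, "<ack/>")]
channels = demux(wire)
```

This is the property the commenter had in mind: DOM synchronization and event notification could proceed concurrently without one blocking the other.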

Comment: XMPP may be right; doesn't have BEEP's multiple channels

Comment: could use on top of SIP events

Comment: what is interoperation point?  If it is network for lots of configuration points, there are lots of network elements that talk to each other; that's not what we're doing here; classic way to do this is write application, split into two pieces (client and server); use private protocol between them; why do you need to expose nature of content?
- Vlad: you don't expose nature of content

Comment: this problem is in application domain; don't need a network protocol
- Dean: assumption that client is independent of application; protocol definition between two elements to enable application without having to download software to understand UI elements

Dean (as chair) asks how many optimists are in the room: if you think there is work here for the IETF, please raise your hand (a large number do)
Dean asks for a raise of hands if you think this is out of scope for the IETF (a few hands go up)
Dean asks what we can do to better understand the problem so all hands are on the same side; need architecture model and maybe requirements document

Comment: my hand was half-up; it isn't clear that we have bounds; we need to charter-bash and see if we can get reasonable bounds

Scott (AD): mailing list (remoteui@ietf.org) is available; we need to refine scope and work on boundaries; we need to do this on the mailing list; next step is to put together charter to define goals

Dean: we'll continue with charter discussion on mailing list

Comment: why would I use this instead of, e.g., Sun's JINI?
Dean: I had the same question; an object description in JINI is a Java program; much worse security implications, larger size, need a full JVM, etc.  We need a semantic model, no arbitrary execution of code, limited execution, no JVM, etc.

Comment: would be helpful to have requirements from W3C; there are a lot of really cool and nifty things we could do that no one would care about; a goals document would help