LREC 2004 Workshop on Multimodal Corpora : 2nd and final CFP
Jean-Claude MARTIN <Jean-Claude.Martin <at> limsi.fr>
2004-01-06 13:29:05 GMT
This message is posted to several lists.
We apologize if you receive multiple copies.
Please forward it to everyone who might be interested.
SECOND AND FINAL CALL FOR PAPERS
MODELS OF HUMAN BEHAVIOUR
FOR THE SPECIFICATION AND EVALUATION
OF MULTIMODAL INPUT AND OUTPUT INTERFACES
Centro Cultural de Belem, LISBON, Portugal, 25th May 2004
In Association with
4th INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION
LREC2004, http://www.lrec-conf.org/lrec2004/index.php
Main conference: 26-27-28 May 2004
The primary purpose of this one day workshop is to share information
and engage in the collective planning for the future creation of usable
pluridisciplinary multimodal resources.
It will focus on the following issues regarding multimodal corpora:
how researchers build models of human behaviour out of the annotations
of video corpora,
how they use such knowledge for the specification of multimodal input
(e.g. merging users' gestures and speech )
and output (e.g. specification of believable and emotional behaviour in
Embodied Conversational Agents) in human computer interfaces,
and finally how they evaluate multimodal systems (e.g. full system
evaluation and glass box evaluation of individual components).
Topics to be addressed in the workshop include, but are not limited to:
* Models of human multimodal behaviour in various disciplines
* Integrating different sources of knowledge (literature in
socio-linguistics, corpora annotation)
* Specifications of coding schemes for annotation of multimodal video
* Parallel multimodal corpora for different languages
* Methods, tools, and best practice procedures for the acquisition,
creation, management, access, distribution, and use of multimedia and
multimodal corpora
* Methods for the extraction and acquisition of knowledge (e.g. lexical
information, modality modelling) from multimedia and multimodal corpora
* Ontological aspects of the creation and use of multimodal corpora
* Machine learning for and from multimedia (i.e., text, audio, video),
multimodal (visual, auditory, tactile), and multicodal (language,
graphics, gesture) communication
* Exploitation of multimodal corpora in different types of applications
(information extraction, information retrieval, meeting transcription,
translation, summarisation, www services, etc.)
* Multimedia and multimodal metadata descriptions of corpora
* Applications enabled by multimedia and multimodal corpora
* Benchmarking of systems and products; use of multimodal corpora for
the evaluation of real systems
* Processing and evaluation of mixed spoken, typed, and cursive (e.g.,
pen) language input
* Automated multimodal fusion and/or generation (e.g., coordinated
speech, gaze, gesture, facial expressions)
* Techniques for combining objective and subjective evaluations, and
for making evaluations cost-effective, predictive and fast
The output of the workshop will be the following:
* Better knowledge of the potential of major models of human multimodal
behaviour
* Challenging issues in the usability of multimodal corpora
* Fostering of a pluridisciplinary community of multimodal researchers
and multimodal interface developers
Multimodal resources feature the recording and annotation of several
modalities, such as speech, hand gesture, facial expression, body
posture, and graphics. Several researchers have been developing such
multimodal resources, often with a focus on a limited set of modalities
or on a given application.
A number of projects, initiatives and organisations have addressed
multimodal resources with a federative approach:
* At LREC2002, a workshop addressed the issue of "Multimodal
Resources and Multimodal Systems Evaluation"
* At LREC2000, a first workshop addressed the issue of multimodal
corpora, focusing on meta-descriptions and large corpora
* The European 6th Framework Programme (FP6), started in 2003, includes
multilingual and multisensorial communication as one of its major R&D
issues, and the evaluation of technologies appears as a specific item
in the Integrated Project instrument presentation
* NIMM was a working group on Natural Interaction and MultiModality
which ran under the IST-ISLE project (http://isle.nis.sdu.dk/). In
2001, NIMM compiled a survey of existing multimodal resources (more
than 60 corpora are described in the survey), coding schemes and
annotation tools. The ISLE project was developed both in Europe and in
the USA
* ELRA (European Language Resources Association) launched in November
2001 a survey about multimodal corpora, including marketing aspects
* A Working Group at the Dagstuhl Seminar on Multimodality collected
28 questionnaires from researchers on multimodality, of which 21
announced their intention to record further multimodal corpora in the
future.
* Other surveys have been recently made about multimodal annotation
coding schemes and tools (COCOSDA, LDC, MITRE).
Yet, existing annotations of multimodal corpora have until now been
made mostly on an individual basis, each researcher or team focusing
on its own needs and knowledge about modality-specific coding schemes
or application examples.
Thus, there is a lack of real common knowledge and understanding of how
to proceed from annotations to usable models of human multimodal
behaviour, and of how to use such models for the design and evaluation
of multimodal input and embodied conversational agent interfaces.
Furthermore, the evaluation of multimodal interaction poses different
(and very complex) problems compared with the evaluation of monomodal
speech or WYSIWYG direct interaction interfaces.
There are a number of recently finished and ongoing projects in the
field of multimodal interaction
in which attempts have been made to evaluate the quality of the
interfaces in all meanings
that can be attached to the term 'quality'.
There is a widely felt need in the field for exchanging information on
interaction evaluation with researchers in other projects.
One of the major outcomes of this workshop should be a better
understanding of the extent to which evaluation procedures developed
in one project generalise to other, somewhat related projects.
IMPORTANT DATES
* 24 January 2004: Deadline for paper submission
* 29 February 2004: Acceptance notifications and preliminary program
* 21 March 2004: Deadline final version of accepted papers
* 25 May 2004: Workshop
SUBMISSIONS
The workshop will consist primarily of paper presentations and
discussions. Submissions should be 4 pages long, must be in English,
and must follow the submission guidelines at
http://lubitsch.lili.uni-bielefeld.de/MMCORPORA
Demonstrations of multimodal corpora and related tools are encouraged as
well (a demonstration outline of 2 pages can be submitted).
As soon as possible, authors are encouraged to send a brief email to
lrec <at> limsi.u-psud.fr
indicating their intention to participate, including their contact
information and the topic they intend to address in their submissions.
Proceedings of the workshop will be printed by the LREC Local Organising
Committee. The organisers might consider a special issue of a suitable
journal for selected publications from the workshop.
TIME SCHEDULE AND REGISTRATION FEE
The workshop will consist of a morning session and an afternoon session,
with a focus on the use of multimodal corpora for building models of
human behaviour and for specifying/evaluating multimodal input and
output Human-Computer Interfaces. There will also be time slots for
collective discussion, and one coffee break in the morning and in the
afternoon.
For this full-day Workshop, the registration fee is 100 EURO for LREC
conference participants and 170 EURO for other participants. These fees
include the coffee breaks and the Proceedings of the Workshop.
ORGANISING COMMITTEE
Jean-Claude MARTIN, LIMSI-CNRS, martin <at> limsi.u-psud.fr
Elisabeth Den OS, MPI, Els.denOs <at> mpi.nl
Peter KÜHNLEIN, Univ. Bielefeld, p <at> uni-bielefeld.de
Lou BOVES, L.Boves <at> let.kun.nl
Patrizia PAGGIO, CST, patrizia <at> cst.dk
Roberta CATIZONE, Sheffield, roberta <at> dcs.shef.ac.uk
PRELIMINARY PROGRAM COMMITTEE
Niels Ole BERNSEN
Elisabeth Den OS
Jan Peter DE RUITER