Description of the challenges faced by the Tech Project
CONTENT4ALL stands for “Personalised Content Creation for the Deaf Community in a Connected Digital Single Market”. It is a project financed under grant no. 762021 within the H2020 framework programme of the European Commission. In 2001 the United Nations began preparing the Convention on the Rights of Persons with Disabilities, which entered into force in 2008. Applied to the domain of TV, this means that spoken TV content must be made accessible to Deaf people via sign language, whose production costs are very high. Many broadcasters therefore fall back on subtitling as a convenient solution. For most Deaf people, however, sign language is their mother tongue, and written language is closer to a foreign language. Full accessibility of TV content for the Deaf can therefore only be provided via sign interpretation. The CONTENT4ALL project aims to make video content more accessible for the sign language community by implementing an automatic sign-translation workflow with a photorealistic 3D human avatar. A low-cost solution for personalised sign-interpreted content creation addresses both the production-cost problem and the accessibility problem, leading to greater accessibility of media content for Deaf users. CONTENT4ALL proposes such a solution in the short term, one that is also commercialisable, and proposes technological innovations that can lead to automated sign-translation capabilities in the long term.
Brief description of technology
Two calibrated professional cameras and one Microsoft Kinect will record and stream a human sign interpreter for (Swiss-)German and (Flemish-)Dutch sign languages in the news domain, in particular sports and weather forecasts. The captured data (video, positions) and metadata (subtitles, speech-to-text, scene script) will be analysed and synthesised into language models, which will serve to reproduce the human sign interpreter as a photorealistic avatar. The vision of the CONTENT4ALL project is to automatically translate spoken TV content into sign language and present it via a 3D representation of a human sign interpreter (a photorealistic avatar). All the captured data and streamed metadata will be made available to the artist, and we can investigate the possibility of sharing part of the avatar model, too.
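To make the shape of the captured material more concrete, the capture step above can be sketched as a small data model. This is an illustrative sketch only: the type names, fields, and the `validate` helper are assumptions for this example, not the project's actual data format.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical data model sketching what one synchronised capture
# step from the two-camera + Kinect setup might contain.

@dataclass
class CapturePacket:
    """One time step of captured signer data (assumed structure)."""
    timestamp_ms: int
    camera_frames: List[bytes]              # encoded frames from the 2 professional cameras
    kinect_depth: bytes                     # depth map from the Kinect sensor
    kinect_skeleton: List[Tuple[float, float, float]]  # (x, y, z) joint positions

@dataclass
class SceneMetadata:
    """Metadata streamed alongside the capture (assumed structure)."""
    subtitles: List[str]
    transcript: str                         # speech-to-text output
    scene_script: str

def validate(packet: CapturePacket) -> bool:
    """Basic sanity check: two camera streams plus Kinect data present."""
    return len(packet.camera_frames) == 2 and bool(packet.kinect_depth)
```

A downstream analysis stage would consume streams of such packets together with the metadata to build the language models that drive the avatar.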
What the project is looking to gain from the collaboration and what kind of artist would be suitable
We are an interdisciplinary team of experts from different domains (e.g. computer science, data modelling and broadcasting). We also have experts developing the user interaction, but rather from a functional point of view. What we lack is an expert who can communicate our technical possibilities to the addressed community with a strong visual sensibility. We are open to any kind of artistic idea, but could imagine the artist's support taking forms such as:
• Visualising the project's (intermediate) data, e.g. in the form of graphic user stories or visual demos (videos).
• An original and novel interpretation of the visual and spatial information captured from the sign interpreter.
• A discussion with the artist to explore the relationship between the human being and their synthetic representation.
In addition, the evaluation phase with the end users might be influenced and better described thanks to the artist's point of view.
Resources available to the artist
We imagine the artistic residency taking place in Berlin, where two partners whose work is closely related to user needs and user representation are based. These partners would both provide meaningful input regarding the user group and profit most from artistic creativity: HFC Human-Factors-Consult is responsible for assessing user needs and requirements, designing the system's user interface and conducting user tests, while Fraunhofer HHI is developing the 3D representation of the signer. CONTENT4ALL will support the artist with in-kind support and mentorship on the project's technologies and data, which will also be made available to the artist for producing his/her artworks.