The SOLIDE Project
Voice-operated contextual information about situation reports for emergency personnel and staff in civil protection
Duration: 36 months
Kickoff: August 2017
In emergency situations, major damage scenarios and catastrophes, emergency personnel are confronted with a huge amount of information from different sources. This information must be analysed, processed and made available to specific users so that a situation report can be constructed quickly – all without distracting the emergency personnel from their work.
Speech, as a natural communication tool, is a perfect match for this task when paired with innovative technology. Aristech has already made a name for itself in this field, and the partners in the SOLIDE research project naturally decided on the German software developer from Heidelberg. Aristech is the only independent European provider that develops Speech Recognition (ASR), Speech Synthesis (TTS) and specialised semantics-based speech analysis tools with AI algorithms.
Through the SOLIDE project, the Federal Ministry of Education and Research (BMBF) aims to develop reliable support for emergency services, as part of its funding announcement “KMU-innovativ: civil security research” within the German government's “Civil Security Research” programme.
Details on the Project and Research:
The results of SOLIDE will extend applications for situation reports through voice-operated controls and innovative data integration. Relevant information can be obtained quickly through spoken output and input, contributing to efficient support for all personnel in emergency responses. In addition, the system must be integrated into existing control systems to make the best use of current situation reports.
The SOLIDE project will develop new approaches for accessing integrated situation reports efficiently. The main focus is on speech-based controls as well as innovative data integration techniques. All relevant data – for example sensor data, incident logs or geographical data – will be integrated into the situation report. The data will be made accessible through research into specialised algorithms for filtering relevant knowledge and suitable connection processes. The development of user interfaces and question answering algorithms will allow spoken, distraction-free information retrieval.
The prototype system will be optimized and evaluated in several practice drills with users.
Further information can be obtained at: https://www.sifo.de/files/Projektumriss_SOLIDE.pdf.
The four project partners PRO DV, Aristech, Paderborn University and Bonn University are working together to conclude the project successfully within three years.
The Gambas Project
The GAMBAS project is a three-year European research project which started in February 2012. It is co-funded by the European Commission within the 7th Framework Programme in the area of Internet of Things under Grant Agreement No. 287661.
The overall objective of the GAMBAS project is the development of an innovative and adaptive middleware to enable the privacy-preserving and automated utilization of behavior-driven services that adapt autonomously to the context of users.
In this advanced and challenging project, Aristech has taken on the research and development of data acquisition via speech and sound input, i.e. finding ways for the mobile application to gather information about the user’s environment and trip intentions by analysing his auditive environment.
In the first phase of the project this meant a deep dive into signal processing to build the so-called voice tagging component. For this purpose, Aristech developed a tool that reduces an audio file to a simple string, the audio fingerprint, without losing relevant information. Voice tagging gives a user the option to tag his favorite or most visited locations by dictating their names once he is there. Later he can dictate the name again and will then be directed to that location. Since resource-efficient computing is one of the main concerns in the GAMBAS project, the voice tagging component was designed to reduce the audio input to a representation that is detailed enough for reliable re-recognition yet compact enough to run easily on the device.
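The general idea of such a fingerprint can be sketched in a few lines. This is an illustrative toy, not Aristech's actual algorithm: the frame size, the probe frequencies and the energy threshold below are all assumptions chosen for the example. Each frame is summarised by the energy in a few coarse frequency bands, quantized to one bit per band; the concatenated bits form the fingerprint string, and re-recognition compares two such strings symbol by symbol.

```python
import math

def fingerprint(samples, rate=16000, frame_ms=100, bands=4):
    """Reduce raw audio samples to a compact symbol string.

    Illustrative only: each frame is probed at multiples of 200 Hz
    (an arbitrary choice) with a naive DFT, and each band's energy
    is quantized to a single bit.
    """
    frame_len = rate * frame_ms // 1000
    bits = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        for k in range(1, bands + 1):
            freq = k * 200.0  # assumed probe frequency, not from the project
            re = sum(s * math.cos(2 * math.pi * freq * i / rate)
                     for i, s in enumerate(frame))
            im = sum(s * math.sin(2 * math.pi * freq * i / rate)
                     for i, s in enumerate(frame))
            # One bit per band: is there noticeable energy here?
            bits.append("1" if re * re + im * im > 1e3 else "0")
    return "".join(bits)

def similarity(fp_a, fp_b):
    """Fraction of matching symbols -- a crude re-recognition score."""
    n = min(len(fp_a), len(fp_b))
    return sum(a == b for a, b in zip(fp_a, fp_b)) / n if n else 0.0

# One second of a 400 Hz tone vs. one second of silence.
tone = [math.sin(2 * math.pi * 400 * i / 16000) for i in range(16000)]
silence = [0.0] * 16000
fp_tone, fp_silence = fingerprint(tone), fingerprint(silence)
```

The design trade-off mentioned above is visible here: fewer bands or longer frames shrink the fingerprint (cheaper to store and compare on the device) but discard detail needed to tell similar environments apart.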
In the second phase of the project, Aristech developed a feature called noise map, which allows different environments to be categorised just by their background noise. Furthermore, Aristech built a robust, resource-efficient on-device speech recognition component to allow serverless speech interaction between the user and the application.
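The noise-map idea can be illustrated with a tiny nearest-centroid classifier over two toy acoustic features. The feature choice (RMS energy and zero-crossing rate), the labels and the centroid values are all illustrative assumptions, not the project's actual method; in practice the centroids would be learned from labelled recordings.

```python
import math

def features(samples):
    """Toy acoustic features: RMS energy and zero-crossing rate."""
    n = len(samples)
    rms = math.sqrt(sum(s * s for s in samples) / n)
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / n
    return (rms, zcr)

def classify(samples, centroids):
    """Label an environment by the nearest feature centroid."""
    f = features(samples)
    return min(centroids, key=lambda label: math.dist(f, centroids[label]))

# Hypothetical centroids (label -> feature vector), assumed for the example.
centroids = {"quiet room": (0.01, 0.01), "busy street": (0.5, 0.3)}

# Synthetic test signals: a faint low tone vs. a loud high-frequency one.
quiet = [0.01 * math.sin(2 * math.pi * 100 * i / 16000) for i in range(1600)]
loud = [0.8 * math.sin(2 * math.pi * 3000 * i / 16000) for i in range(1600)]
```

Because both features are cheap to compute and the comparison is a handful of distance calculations, a scheme of this shape can run entirely on the device, in line with the project's resource-efficiency goal.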