ARTIFICIAL INTELLIGENCE TOOLS
VSNExplorer MAM integrated with AI systems
State-of-the-art technology applied to media management with VSNExplorer MAM
The VSNExplorer MAM media management platform has been integrated via API with the automatic metadata detection systems of IBM Watson, Google, and Microsoft Azure. Thanks to this integration, media management becomes more efficient, accurate, and easy, giving greater control over the available content, both in storage and during ingest, and reducing the time and cost needed to produce higher-quality content.
Speech-to-text and automatic translation
Object and audio effects detection
Media sentiment analysis
Context information extraction
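The functions above all produce segment-level metadata. As a minimal sketch, the record for one detected segment might look like the following (the field names are illustrative only, not the actual VSNExplorer data model):

```python
from dataclasses import dataclass, field

# Hypothetical shape of one AI-detected metadata segment.
# Field names are illustrative, not the VSNExplorer API.
@dataclass
class DetectedSegment:
    mark_in: float            # segment start, in seconds
    mark_out: float           # segment end, in seconds
    labels: list              # detected objects / audio effects
    transcript: str = ""      # speech-to-text output for the segment
    sentiment: str = ""       # e.g. "positive", "negative", "neutral"
    confidence: float = 0.0   # detection confidence, 0..1

segment = DetectedSegment(
    mark_in=12.0, mark_out=18.5,
    labels=["crowd", "applause"],
    transcript="thank you all for coming",
    sentiment="positive",
    confidence=0.92,
)
print(segment.labels, segment.sentiment)
```

A record of roughly this shape is what the cataloging view would index, making every segment independently searchable.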
AUTOMATIC METADATA DETECTION
Integrating VSNExplorer MAM with advanced Artificial Intelligence systems enables automatic, exhaustive detection of metadata directly from the media, accessible from the VSNExplorer MAM cataloging view. This makes media file cataloging precise, and users can even train the system to improve the results. Files or video segments of interest can be retrieved quickly and easily, both from previously stored videos and from videos being ingested, which become available almost immediately.
The large amount of information the AI functionality in VSNExplorer MAM obtains from each media file makes content retrieval faster and easier. Users can search not only on file-level information but also on each of its segments, based on image, audio fragments, objects, people, or even media sentiment. This brings greater accuracy when searching for a specific piece of video to complete a news story, improving the quality of the content delivered to your audience in less time.
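Segment-level search of this kind can be sketched as a simple filter over detected metadata. The catalog entries below are mock data; the real VSNExplorer index and query syntax are not documented here:

```python
# Minimal sketch of segment-level search over AI-detected metadata.
# Each segment is a plain dict for illustration only.
catalog = [
    {"file": "news_01.mxf", "mark_in": 0.0, "mark_out": 8.2,
     "labels": ["studio", "anchor"], "sentiment": "neutral"},
    {"file": "news_01.mxf", "mark_in": 8.2, "mark_out": 21.7,
     "labels": ["stadium", "crowd"], "sentiment": "positive"},
]

def find_segments(catalog, label=None, sentiment=None):
    """Return segments matching all of the requested criteria."""
    hits = []
    for seg in catalog:
        if label is not None and label not in seg["labels"]:
            continue
        if sentiment is not None and seg["sentiment"] != sentiment:
            continue
        hits.append(seg)
    return hits

for seg in find_segments(catalog, label="crowd", sentiment="positive"):
    print(f'{seg["file"]} {seg["mark_in"]}-{seg["mark_out"]}')
```

Because each hit carries its own mark-in and mark-out times, an editor can jump straight to the matching fragment instead of scrubbing through the whole file.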
TRANSLATION, CAPTIONING AND SUBTITLING
Language analysis, speech-to-text, and automatic translation enable advanced automatic cataloging in multiple languages, allowing users to run text searches on the audio layer. Together with the text, the system delivers each segment with its mark-in and mark-out times, allowing captions and subtitles to be generated automatically in several languages. All of this is indexed in the system so it can be easily searched and retrieved.
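Turning timed transcript segments into subtitles is mechanical once the mark-in/mark-out times are known. A minimal sketch in the standard SubRip (SRT) format, using made-up segment data:

```python
def srt_timestamp(seconds):
    """Format a time in seconds as an SRT timestamp HH:MM:SS,mmm."""
    millis = round(seconds * 1000)
    h, rem = divmod(millis, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments):
    """Build an SRT document from (mark_in, mark_out, text) tuples."""
    blocks = []
    for i, (mark_in, mark_out, text) in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n"
            f"{srt_timestamp(mark_in)} --> {srt_timestamp(mark_out)}\n"
            f"{text}\n"
        )
    return "\n".join(blocks)

# Example transcript segments (illustrative data).
segments = [
    (0.0, 2.5, "Good evening."),
    (2.5, 6.0, "Here is tonight's news."),
]
print(segments_to_srt(segments))
```

Running the translated text through the same routine yields subtitle tracks in each target language from one set of timings.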
Sentiment recognition, image and object analysis, and natural language analysis allow sensitive or adult content to be detected automatically. These functionalities simplify the adaptation of scenes where faces, objects, or other images need to be blurred.
The metadata automatically detected by AI systems makes it possible to link and associate additional information extracted from the Internet (maps, locations, history, images, biographies, etc.) with any specific word that has been published, thanks to Linked Data sources such as DBpedia and GeoNames.
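As a hedged illustration of the Linked Data step, a detected term can be turned into a query against the public DBpedia Lookup service. The endpoint and `query` parameter below follow that service's public documentation; verify them before relying on this, as they are not part of VSNExplorer itself:

```python
from urllib.parse import quote

def dbpedia_lookup_url(term):
    """Compose a DBpedia Lookup search URL for a detected term.

    Illustrative only: the endpoint is the public DBpedia Lookup
    service, independent of the VSNExplorer API.
    """
    return "https://lookup.dbpedia.org/api/search?query=" + quote(term)

print(dbpedia_lookup_url("Eiffel Tower"))
```

Fetching that URL returns candidate DBpedia resources whose descriptions, images, and coordinates can then be attached to the cataloged segment; a GeoNames query for place names would follow the same pattern.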