video search engine

Digitally Transforming GLAM (Galleries, Libraries, Archives and Museums)

Some problems are so complex today that they can only be solved using AI. This certainly applies to what many consider to be the last frontier in Search technology - Audio Visual Media.

By using a combination of AIs (like speech and face recognition), Videospace unlocks hundreds of thousands of hours of knowledge within your media libraries by making them accessible and discoverable.

With the World's First Translated Video Search, we can further unleash the full potential of your media library by making it searchable in other languages, extending its accessibility and discoverability!

Besides running Videospace on a world-class video platform (the same platform used by the 2012 and 2016 Olympics), we use a combination of the following advanced technologies:

  • Speech Recognition (over 100 languages)

  • Translation (over 60 languages)

  • Face Recognition

  • Video OCR (up to 26 languages)

  • Natural Language Processing (over 20 languages)

  • Video Search Engine - index and search video in time-series

  • World’s First Translated Search Engine – searches over 6,000 different language pairs

To find out more, please CONTACT US


ANNOUNCEMENT: Global Launch of AIspace – The Next Generation of A.I. Storage

AIspace_banner_large.png

Singapore, 19 April 2019: Babbobox officially announces the global launch of AIspace (pronounced as "i"space) – The Next Generation of A.I. Storage in Singapore.

Enterprises know they need digital transformation, and they also know that Artificial Intelligence (A.I.) will play a big role in that transformation. However, many perceive A.I. to be out of reach, either because they think it is too costly or because they know little about its benefits to them. This is about to change.

AIspace’s Mission is “To make A.I. accessible to all enterprises”

In AIspace, we are infusing A.I. into something all enterprises need - storage - by applying A.I. to the digital assets (documents, images, audio and video) that already exist within your organization. For the first time ever, enterprises will be able to index and search inside all of their documents, images, audio and video on a single platform using the World's First Unified Search Engine.

Beyond Search, you can now apply A.I. to documents and images for analysis. This will open up a whole world of future possibilities, especially in Big Data. 

AIspace is simply what enterprise storage should really be - Intelligent!

About Babbobox

AIspace is a service fully owned by Babbobox. Babbobox developed one of the world's most advanced A.I. Video Search Engines, combining numerous advanced technologies (Speech Recognition, Video OCR, Cognitive Services, Image Analysis, Artificial Intelligence and Enterprise Search) into a single platform.

Babbobox started as a Cloud Document Management System focused on helping enterprises organize their digital assets. In 2017, Babbobox launched VideoSpace - the next-generation Video A.I. Platform. Babbobox has since evolved and transformed to become a global leader in Video Search Engine technologies. We are using these breakthroughs in our data and video platforms to enable enterprises to unleash the true value of their digital assets.

AIspace was launched in 2019 with the mission of making A.I. accessible to all enterprises by infusing A.I. into storage.

To find out more, please CONTACT US

Wishing all a Merry Christmas and a Fantastic New Year!

2018 has been another breakthrough year for us as we traveled the world and launched another TWO World's Firsts:

We would like to thank all of you for taking this sensational journey with us! We just can't wait to take on 2019! 

Wishing you a Merry Christmas and a Fantastic 2019!

Yours sincerely, 

All of us at Babbobox

Video Big Data Whitepaper (FREE download)

video big data videospace

The term "Video Big Data" is rarely heard of. The reasons are pretty simple: 

  1. It's difficult to extract data from videos
  2. It's difficult to make sense of unstructured video data

Therefore, it is no exaggeration to say that video is the most difficult medium to search and extract intelligence from. However, given the amount of video that is generated daily in the public domain (e.g. YouTube) and the private domain (e.g. broadcasters, CCTV, education, etc.), it is also no exaggeration to say that video is the King of Content.

The objective of Big Data is to gain Business Intelligence. Video Big Data is no different. The obvious difference is the source and the type of data that can be extracted from videos.

This Video Big Data Whitepaper aims to explain how we can extract value and intelligence from videos with a 3-step approach (a minimal sketch follows the list below):

  1. Extract video data
  2. Transform unstructured video data
  3. Analyse the data into intelligence
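
To make the three steps concrete, here is a minimal, illustrative Python sketch. It is not the VideoSpace pipeline; the extract_* helpers, the sample segments and the file name are hypothetical stand-ins for real extraction services such as speech recognition and Video OCR.

  # Illustrative only: hypothetical stand-ins for real extraction services.
  from collections import Counter

  def extract_speech(video_path):
      # Step 1a (hypothetical): a speech recognition service would return
      # time-coded transcript segments.
      return [(12.0, "welcome to the annual results briefing"),
              (45.5, "our focus this year is video search")]

  def extract_on_screen_text(video_path):
      # Step 1b (hypothetical): a Video OCR service would return time-coded text.
      return [(13.0, "Annual Results 2018"), (46.0, "Video Search roadmap")]

  def transform(video_path):
      # Step 2: normalize the raw extractions into uniform, time-coded records.
      records = []
      for source, extractor in (("speech", extract_speech),
                                ("ocr", extract_on_screen_text)):
          for start, text in extractor(video_path):
              records.append({"time_s": start, "source": source, "text": text})
      return sorted(records, key=lambda r: r["time_s"])

  def analyse(records):
      # Step 3: a trivial analysis - the most frequent terms across the video.
      words = Counter(w.lower() for r in records for w in r["text"].split())
      return words.most_common(5)

  records = transform("briefing.mp4")   # Steps 1 and 2
  print(analyse(records))               # Step 3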

With this whitepaper, we hope to share some of our knowledge and experience working with Video Big Data. From our calculations, we estimate that Video Big Data will dwarf Big Data as we know it - hence the importance of this whitepaper. We hope you enjoy and benefit from it!

Yours sincerely,

The VideoSpace Team

Bringing AI Video Search to Broadcast Asia

alex-chan-babbobox-videospace-broadcast-asia.jpg

We are super excited about bringing our A.I. Video Search to Broadcast Asia after starting out in the UK, US and China in 2018. It feels so good to be home!

Babbobox CEO Alex Chan will be talking about "The Age of AI" and how it will transform the entire broadcast and media industry with Video Search, Personalized Content and Video Big Data.

We will also be making a big announcement and showcasing it during the show! We are pretty sure it will blow you away! So do drop by and say Hi!

Video Big Data (Part 2) - What kind of Video Data?

videospace-video-big-data.jpg

In the last installment, we explained:

  • Why Video Big Data will absolutely dwarf current Big Data
  • How Video is the most difficult medium to extract data from

This explains why Video Big Data remains a largely unexplored field. It also means there are immense opportunities available, because we have not even scraped the tip of this huge data iceberg.

In this installment, we will examine the kind of data elements that we can extract from videos. 

1. Speech
In an hour of video, a person can say up to 9,000 words, so imagine the amount of data from speech alone. However, the process of transcribing speech is riddled with challenges, and we are only now starting to reach an acceptable level of accuracy.

2. Text
Besides speech, text is probably the second most important element inside videos. For example, in a presentation or lecture, the speaker augments the session with a set of slides, or a news ticker appears during a news broadcast.

3. Objects
There are thousands of objects inside a video, appearing across different timeframes. Therefore, it can be quite challenging to identify which objects are in the video content and in which scenes they appear.

4. Activities
The difference between video and still images is motion. Different video scenes contain complex activities, such as “running in a group” or “driving a car”. The ability to extract activities gives a lot of insight into what the videos are about. This includes offensive content that might contain nudity and profanity.

5. Motion
Detecting motion enables you to efficiently identify sections of interest within an otherwise long and uneventful video. That might sound simple, but what if you have 10,000 hours of video to review every night? Eyeballing every minute of video is a near-impossible task.

6. Faces
Detecting faces in videos adds face detection ability to any surveillance or CCTV system. This is useful for analyzing human traffic within a mall, street, or even a restaurant or café. When we include facial recognition, it opens up another data dimension.

7. Emotion
Emotion detection is an extension of face detection that returns analysis of multiple emotional attributes for the faces detected. With emotion detection, one can gauge audience emotional response over a period of time.

This list of video data is certainly not exhaustive, but it is definitely a good starting point for the field of Video Big Data. In the next installment, we will examine some of the techniques used to extract this video data.

Yours sincerely,

The VideoSpace Team

Video Big Data (Part I) - An Introduction

videospace-video-big-data.jpg

Fact: YouTube sees more than 300 hours of video uploaded every minute. That's 18,000 years' worth of video in a year. And that's YouTube ONLY! If we add all the other videos in the public domain, we wouldn't even know where to start with the numbers.

However, the even bigger numbers are actually hidden in the private domain from sources like broadcasters, media companies, CCTVs, GoPros, bodycams, smart devices, etc. We are recording videos at an unprecedented pace and scale. 

There is one word to describe this - BIG!

Which brings us to Video Big Data - or should I say the lack of it. Even the term "Video Big Data" is rarely heard. The reason is pretty simple: it stems from the inability to extract video data and make sense of it. But there is so much information embedded inside videos waiting to be discovered - it's an absolute goldmine!

So the real question is... how can we extract value from videos?

However, the problem with video is that it is the most difficult medium to work with. There are a few reasons why: 

  • There are so many elements inside a video (speech, text, faces, objects, etc)
  • It is not static.
  • It is very difficult to extract the various elements of video data. 
  • Each video element requires a different data extraction technique.
  • It is very difficult to make sense of video data because of its unstructured nature.
  • It's expensive to extract data at scale

These problems are real and are preventing the arrival of the Age of Video Big Data. But there is hope yet. With substantial use of Artificial Intelligence, VideoSpace is beginning to crack this enigma.

In the next segment of this "Video Big Data" Series, we will examine how we can tackle these problems and extract value from videos. 

Babbobox featured on The Record for their World's First

Our launch - "World's First Video Search Engine with Interactive Results" in Birmingham (UK) was picked up by The Record and given some airtime. It feels great to be picked up and be given that bit of recognition for doing what we do to a global audience. 

Click HERE for the article.

Note: The Record is a global magazine featuring the Best of Enterprise Technology on The Microsoft Platform.

Thank you Birmingham... Hello Washington!

Finally, 2 intensive days of MS Tech Summit in Birmingham... done and dusted. Absolutely the right decision to come to the UK to do this. Massive event! Exactly the right platform to showcase our Video Search technologies.

babbobox-ceo-alex-chan
babbobox-ceo-alex-chan-clevertime-joao-penha-lopes
mstechsummit-birmingham

Caught up with Scott Guthrie. Held so many in-depth discussions with so many UK enterprises, universities, government agencies, etc. If we have our way, our stuff might even end up in Scotland Yard! So let's see...

Good-bye Birmingham... Next stop, Trump-capital Washington in March! I'm excited already...

ANNOUNCEMENT: Global Launch of World’s First “Video Search Engine with Interactive Results”

Birmingham, 24 January 2018: Babbobox and Infini Videos officially announce the launch of the world's first "Video Search Engine with Interactive Results" at the Microsoft Tech Summit held in Birmingham, United Kingdom today.

Both tech start-ups Babbobox and Infini Videos believe the future of video search lies in immediate content relevance. Video has proven to be the hardest medium to index because there is so much detail.  Aside from the metadata that an editor may have typed in, most archived videos are essentially unstructured data.  Often, this is because transcripts are not made and scripts are lost, or there isn’t sufficient timing information to align with the video.

To make sense of this data, techniques such as Speech Recognition, Video OCR, Image Analysis and various Cognitive and Artificial Intelligence services are applied to extract data from media. Since much of the video in archives contains speech, automatic transcription is a great first step in extracting data from media. With the transcript, an editor is able to search for timecodes in source videos, scrub through those sources, and manually locate viable scenes. This manual process is time-consuming, and not suitable for public use since a text search result does not make for a watchable video.

The innovative “Video Search Engine with Interactive Results” that Babbobox and Infini Videos have co-produced, allows a user to search a topic, and immediately view the search results as an interactive video. One is able to immediately choose scenes within the video that are relevant to the search. With the full-automation of the indexing and the output scene selection, productivity is enhanced and content for the public will scale up. The platform promises increased productivity as it will provide users with fine control over topics and sources, and allow editors to focus on the direction of the stories.

Babbobox and Infini Videos believe that this will be the future of Video Search.

About Babbobox (website: www.babbobox.com)
Babbobox has launched two World's Firsts. The first is the World's First True Unified Search Engine, with the ability to index and "Search Everything" (all formats including video, audio and documents), positioning Babbobox to become "The Next Generation of Intelligent Storage". The second, with VideoSpace, is the World's First Video-Search-as-a-Service, forming the foundation for a new breed of video services for the world.

About Infini Videos (website: www.infinivideos.com)
Infini Videos is a B2B online technology platform for the creation and delivery of HTML5 interactive videos. Infini Videos makes it easy to create engaging interactive videos as well as to access the rich data analytics offered on the platform. The company currently offers branching (aka "Choose-your-own-adventure") and 360-degree types of interactivity. In addition to the technology platform, the company also provides specialized creative services as a one-stop solution for clients. Infini Videos is part of Mediacorp's MediaPreneur Incubator programme.

To find out more, please CONTACT US

Bringing VideoSpace to the World... Starting with Birmingham

MSTechSummit_Birmingham_728x90.png

Many asked why we were not present at Microsoft Tech Summit Singapore this week... The reason is simple: we will be at the Birmingham leg (24 to 25 Jan) next week instead!

I promise that we will be making a BIG announcement next week. And it will be another World's First! (Hint: We are bringing Video Search to another level...)

babbobox tech summit birmingham

It definitely feels great to see Babbobox listed as one of the invited companies showing our wares at this Microsoft Tier-1 event. On top of that, I can also say that we are representing Asia alongside Yamaha.

If any of you do happen to be in Tech Summit Birmingham, do drop by and say Hi!

Episode X - A New Search (and a Forceful 2018!)

CLICK to watch how VideoSpace aids the Rebels in their search for the New Death Star...

In search of the new Death Star plans, the Rebels secretly planted thousands of cameras within the Empire in the hope of finding clues. However, with thousands of videos and an impending deadline, how can the Rebels possibly search for information within videos?! Fortunately...

In a galaxy, not far away... 

The VideoSpace Search Engine is helping the Rebels search inside videos for speech, text, objects, motion, faces and emotions (note: not for stormtroopers, since they wear helmets) automatically, finding vital clues to the New Death Star plans...

In the meantime, as we help the Rebels fight the Evil Empire, we wish you a...

Forceful 2018! 

From, 

All of us at Babbobox

May The Search Be With You!

What is a Video Search Engine? Part IV – Detecting Faces and Emotions

The ability to detect faces has been around for some time in real-time CCTV systems. However, these systems remain out of reach for many because they are expensive and need specialized implementation, which drives the cost even higher. Therefore, detecting faces in recorded videos is a viable alternative, because it instantly adds face detection ability to any CCTV system.

Detecting faces allows you to count people and track their movements by identifying unique faces. Face detection finds and tracks human faces within a video. Multiple faces can be detected and subsequently tracked as they move around.

video search engine face detection

This is useful for analyzing human traffic within a mall, street, or even a restaurant or café. It is possible to identify and track the movement of unique human faces, and therefore to perform a headcount of human traffic within the video.
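
As a rough illustration (a minimal sketch, not the VideoSpace implementation), the snippet below counts detected faces once per second of footage using OpenCV's bundled Haar-cascade detector; the input file name is hypothetical, and unique-face tracking or recognition would need additional logic on top of this.

  import cv2

  # Load the frontal-face Haar cascade that ships with opencv-python.
  cascade = cv2.CascadeClassifier(
      cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
  cap = cv2.VideoCapture("mall_footage.mp4")      # hypothetical input file
  fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
  frame_idx = 0

  while True:
      ok, frame = cap.read()
      if not ok:
          break
      if frame_idx % int(fps) == 0:               # sample one frame per second
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
          print(f"{frame_idx / fps:6.1f}s  faces detected: {len(faces)}")
      frame_idx += 1

  cap.release()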

Beyond detecting faces, it is also possible to detect emotions. Emotion detection is an extension of face detection that returns analysis of multiple emotional attributes for the faces detected, for example happiness, sadness, fear, anger, etc.

video search engine emotion detection

Recognizing the emotion of a person or crowd over time allows us to track the emotional highs and lows within a particular time-frame, or to track someone's emotions at a specific point in time - answering questions like: how did the crowd react when the President made a particular point? With emotion detection, you can gauge audience responses in scenarios like:

  • Speeches
  • Focus groups
  • Group reactions
  • Interviews

Emotion detection can form a very good baseline for the scenarios above.

To find out more about how you can detect faces and emotions inside your videos, visit VideoSpace Video Search Engine or our Video-Search-as-a-Service.

What is a Video Search Engine? Part III – Detecting Motion

In Part I and II, we examined how we would be able to search Speech and Text inside videos. In Part III, we will look at one of the first names given to videos – “Motion” Picture. 

So, do all videos have motion? Not necessarily: not all videos contain motion (or movement) all the time, especially in the case of security and surveillance videos.

video-search-engine-motion-detection.jpg

Detecting motion in videos enables you to efficiently identify sections of interest within an otherwise long and uneventful video. That might sound simple for a single video, but what if you have 10,000 hours of video to review every night? Eyeballing every minute of video is a near-impossible task.

Motion detection can be used on static camera footage to identify sections of the video where motion occurs.

  • Detect when motion has occurred in videos with stationary backgrounds
  • Eliminate false positives caused by light changes, shadows, small insects, and the like
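
For illustration only, here is a minimal frame-differencing sketch (shown purely as an example, not necessarily the approach VideoSpace uses) that flags timestamps where a static-camera video changes significantly; the file name and threshold values are hypothetical and would need tuning for real footage.

  import cv2

  cap = cv2.VideoCapture("surveillance.mp4")       # hypothetical input file
  fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
  prev, frame_idx = None, 0

  while True:
      ok, frame = cap.read()
      if not ok:
          break
      # Blur to suppress sensor noise before comparing consecutive frames.
      gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
      if prev is not None:
          diff = cv2.threshold(cv2.absdiff(prev, gray), 25, 255, cv2.THRESH_BINARY)[1]
          if cv2.countNonZero(diff) > 5000:        # crude filter for noise and shadows
              print(f"motion around {frame_idx / fps:.1f}s")
      prev = gray
      frame_idx += 1

  cap.release()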

While there are motion sensors that can detect motion in real time, these systems tend to be expensive, which is why most CCTV surveillance systems only do recording at best. Moreover, there are many scenarios that do not require real-time motion detection, such as detecting a car entering a bus lane during peak hours.

video search engine bus lane detection

Current technology has come to a point where it is able to differentiate between real motion (such as a person walking into a room), and false positives (such as leaves in the wind, along with shadow or light changes). This allows you to generate security alerts from camera feeds without being spammed with endless irrelevant events, while being able to extract moments of interest from extremely long surveillance videos.

To find out more about how you can detect motion inside your videos, visit VideoSpace Video Search Engine or our Video-Search-as-a-Service.

What is a Video Search Engine? Part II – Searching Text

In Part I, we found out that there are 7,099 living languages in the world. That includes both written languages and spoken-only languages. According to Ethnologue (20th edition), out of those 7,099 living languages, 3,866 have a developed writing system.

Which leads us to the second part of our series - searching Text inside a video. Besides speech, text is probably the second most important element from which we can extract data.

For example, in a presentation or talk, the speaker would augment the session with a set of slides in addition to his speech. Therefore, besides his voice, the text in the slides is another set of data that can be captured. This is important because what he says and what he presents in the slides can be vastly different.

Text that can be OCRed during a presentation

The technology for capturing this text inside video is called Video OCR (Optical Character Recognition). Video OCR is derived from OCR, a technology that has been around for a long time.

By strict definition, Optical Character Recognition (OCR) is the mechanical or electronic conversion of images of typed, handwritten or printed text into machine-encoded text, whether from a scanned document, a photo of a document, a scene-photo or from subtitle text superimposed on an image (source: Wikipedia). The first OCR machine that read characters and converted them into standard telegraph code was invented by Emanuel Goldberg in 1914!

Unfortunately, one hundred years on, OCR technology still has some way to go, especially in adding more language capabilities and recognizing handwriting. However, with more A.I. and Machine Learning, the hope is that researchers can add more capabilities to what OCR can do now.

However, Video OCR is giving OCR a new lease of life by simply adding another dimension - moving images. Given the amount of video that has never been OCRed before and the amount of video being generated every day, the potential for Video OCR is immense.
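
As a minimal sketch of the idea (assuming the open-source Tesseract engine via pytesseract, rather than the VideoSpace OCR service), the snippet below OCRs one frame every few seconds and keeps the timecode alongside the recognized text; the input file name is hypothetical.

  import cv2
  import pytesseract

  cap = cv2.VideoCapture("lecture.mp4")            # hypothetical input file
  fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
  index, frame_idx = [], 0

  while True:
      ok, frame = cap.read()
      if not ok:
          break
      if frame_idx % int(fps * 5) == 0:            # sample a frame every 5 seconds
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          text = pytesseract.image_to_string(gray).strip()
          if text:
              index.append({"time_s": frame_idx / fps, "text": text})
      frame_idx += 1

  cap.release()
  print(index[:3])   # time-coded text, ready to be fed to a search index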

To find out more about how you can search TEXT inside your videos, visit VideoSpace Video Search Engine or our Video-Search-as-a-Service.

Video Platform from a DevOps perspective - Babbobox CTO, Sabrina Lim at CloudExpo Asia 2017

Babbobox CTO, Sabrina Lim (yes.. she's a female CTO), will be speaking at CloudExpo Asia - DevOps Live at 12.05pm on 12 Oct.

Babbobox CTO Sabrina Lim

Essentially, she'll be speaking about a combination of technologies covering Media, Search, A.I., Cognitive services and unstructured data (all the stuff that we are using) from the DevOps perspective. So yes... it'll be a bit geeky and techie!

So if you are at the show, do drop by and say hi!

Microsoft recognizes Babbobox as key global partner for Media Services

We are delighted to be listed as a key Microsoft global partner for Azure Media Services on http://amslabs.azurewebsites.net/. (Please scroll down)

babbobox video search engine

This is in recognition of Babbobox's pioneering Unified Search Engine and Video-Search-as-a-Service, both of which are World's Firsts.

"We are honoured to be invited by Microsoft to be part of this exclusive club of partners." said Alex Chan, Babbobox's CEO, "Considering Babbobox is still a relatively new entrant in comparison to other partners, to be invited into this elite group shows that Microsoft recognizes the huge potential in the things that we are doing. That Babbobox is pushing the boundaries and transforming the way we search in future."

Find out more about Babbobox Search Engine.
Watch this space for more updates!

World's First Video-Search-as-a-Service (VSaaS)? #Part 2 - Searching Text

Think of VideoSpace Video-Search-as-a-Service as the Ultimate SEO Engine. That's because we are able to index and search videos in 6 areas: Speech, Text, Motion, Face, Emotion and Offensive Content. We use Artificial Intelligence to automatically extract data from your videos so that you can make better sense of them. More importantly, this service is absolutely affordable!

#PART 2 - SEARCHING TEXT

Search engines today can only crawl a video's "Title" and "Metadata". The main problem is that this search is limited by the words manually entered into those fields. Many videos do not even have metadata, which makes them virtually impossible to search.

If you have a library of video PRESENTATIONS, this is exactly what you need!

Video-Search-as-a-Service does the following automatically:  

  • OCRs the text in the video (available in 26 languages)
  • Makes the speech searchable
  • Generates SEO for the video

If you can't find it, you can't use it!

It's as simple as that. Search is a fundamental human and organisational need. Video content can be expensive to produce and maintain. What VSaaS really does is enhance the video content that you or your organisation possess.

We will be launching our Video-Search-as-a-Service (VSaaS) at the Singapore leg of the Microsoft Tech Summit, held from March 13 to 14, 2017. This event is filling up quickly, so I encourage you to reserve a spot NOW! Click on the link below to register!

What is a Video Search Engine?

Let's dissect this into 2 parts - "Video" and "Search Engine". 

Starting with "Search Engine" first. We are so used to using search engines today that we do not really bother with how a search engine really works. And perhaps you shouldn't... why should you as long as the results are good. We normally start questioning (or complain) when the results are not what we expect it to be. 

So a good search engine should do a couple of things. It should (let's get a bit technical) have:

  • A good Indexing engine
  • Phrase matching
  • Smart Search result Summary 
  • Keyword highlighting
  • Stemming/Lemmas (Word form variations are searched and ranked lower)
  • Complex expression support; nested groups, partial matching, NOT, OR and AND
  • Multiple Format indexing
  • Unicode and non English language support

If all of the above parameters are measurable, you will be able to figure out whether one engine is better than another.

So the format that we want to search is "Video". Today, typical search engines can only search "Title" and "Metadata". Even if both "title" and "metadata" are well defined and representative of the video itself, what is missing is the content. Imagine you have a thousand-page document and you can only search the document title and its summary. That's the current state of affairs for video search.

So of course the next question is: what do you want to search for in a "Video"? That's like opening Pandora's Box. Unlike a document, video is multi-dimensional and contains a lot more information - for example, speech, words, people, objects, movement, colours, etc.

Currently, many of these search technologies still do not exist or are barely in their infancy. What is available now is just scratching the tip of the iceberg. Therefore, the real definition of a video search engine is still evolving.

At VideoSpace, we would like to define our version of a Video Search Engine - one that is able to search six key areas:

  • Speech Recognition
  • Words (or Text)
  • Motion Detection
  • Facial Detection
  • Emotion Detection
  • Offensive Content Detection

Numerous reports say the same thing: by 2017, video will account for more than 70% of all internet traffic. Imagine having the ability to search videos in the future.

The VideoSpace Video Search Engine is taking the leap now. 

Difference between VideoSpace's Video Search Engine vs. other search engines

We get quite a lot of people asking us the same question... "What's the difference between the VideoSpace Video Search Engine and other search engines?". And we thought the best way to explain this is to use icebergs.

Typical search engines today can only search the tip of the iceberg, but we all know more than 90% of the real content is submerged underwater. That's the difference!

Search engines today can only search "Title" and "Metadata". But VideoSpace's Video Search Engine can also search for:

  • "Title"
  • "Metadata"
  • "Speech" (spoken in the video) 
  • "Words" (appearing in the video)

In the example below, the "Title" and "Metadata" provide very little evidence of the actual discussion in the panel session. Without the VideoSpace Video Search Engine, we wouldn't know that the term "Arts Program" was mentioned 5 times, or that "Japan" was even mentioned during the panel discussion.

With the VideoSpace Video Search Engine, we can even pinpoint exactly where the word "Japan" was spoken. 
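
To illustrate the idea (a simplified sketch, not the actual VideoSpace index format), a time-coded transcript is what makes this kind of pinpointing straightforward; the transcript segments below are hypothetical.

  # Hypothetical speech recognition output: (start time in seconds, text).
  transcript = [
      (12.4, "welcome to the panel on regional arts programs"),
      (95.0, "our arts program expanded to Japan in 2015"),
      (310.7, "the Japan initiative doubled attendance"),
  ]

  def search(term, segments):
      # Return the timecode of every segment that mentions the term.
      term = term.lower()
      return [start for start, text in segments if term in text.lower()]

  for t in search("Japan", transcript):
      print(f"'Japan' spoken at {int(t // 60)}m{t % 60:04.1f}s")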

This is something you need to see for yourself!

#1 Click on the image below
#2 Type "Japan" into the search box
#3 Click onto any of the resultant links