
Using AI to promote Singapore start-ups with Videospace

Using-AI-to-promote-singapore-startups_videospace.jpg

Besides naming Babbobox one of the top start-ups for the Future of Enterprise AI, Enterprise Singapore has adopted our Video AI platform, Videospace, to promote Singapore start-ups.

Jointly organised by Enterprise Singapore and the Monetary Authority of Singapore, Deal Fridays is a platform for dealmaking opportunities in the lead-up to the Singapore FinTech Festival (SFF) x Singapore Week of Innovation and Technology (SWITCH) 2020.

Using various video AI capabilities from Videospace, the Deal Fridays team has been able to reduce manual processes by 8 to 10 times, while gaining benefits that only AI can provide, such as deep video search.

videospace_dealfridays.png

“We are delighted to be named one of Singapore’s top start-ups,” said Alex Chan, CEO of Babbobox. “The bigger validation is that Enterprise Singapore recognises our value proposition and has adopted Videospace. We are proud to be part of the Deal Fridays programme and ecosystem in promoting Singapore’s top start-ups.”

Videospace-logo-color-250x129.jpg

Announcement: Babbobox named as one of the top start-ups for Enterprise AI

We are delighted to be named one of the top start-ups for the Future of Enterprise AI by Enterprise Singapore.

ESG logo.jpg

Considering that we are still a relatively small team, we are delighted to be acknowledged for the work that we’ve done.

We would really like to thank the team at Enterprise Singapore for this recognition and their effort in promoting Babbobox internationally under the Deal Fridays programme, where we are featured here (http://dealfridays.videospace.co/vod/video.aspx…)

Keep Calm and Carry on Part 3 - 360 Videos (Beta)

keep-calm.jpg

This COVID-19 period is proving to be one of our most innovative periods ever. Our foray into 360 videos is our third beta in three months, after Translated Machine Speech and Video Live Transcription.

360 videos are the way to go if you want to offer a virtual tour or an interactive, immersive experience

We are delighted to announce that Videospace will be supporting 360 videos on our enterprise-grade video platform. 360 videos, also known as immersive or spherical videos, are video recordings where a view in every direction is recorded at the same time. Research shows there are clear benefits to 360-degree video content:

  • the unlimited possibilities it gives to viewers and content creators

  • increased engagement

  • over 3 times the conversion rate of traditional video content

  • an over 30% higher repeat-view rate

  • 70% of those who have used 360 videos say it has increased engagement


Need 360 content production? Fret not!

We are partnering with iMMERSiVELY for cutting-edge 360 video content production. iMMERSiVELY is a creative startup specialising in immersive media technologies, producing applications and solutions for businesses across various industries. From Augmented and Virtual Reality content and technology development to 360° content production and more, iMMERSiVELY harnesses these technologies to advance communication, innovation, storytelling and education.

“We are excited because this partnership opens up new opportunities and frontiers for both Babbobox and iMMERSiVELY,” says Babbobox CEO Alex Chan. “Both companies believe that this partnership will only strengthen and solidify our pioneer status in the virtual and media industry.”

KEEP CALM and CARRY ON Part II - Video Live Transcription (Beta)

keep-calm.jpg

About a month ago, we launched our first beta (Translated Machine Speech) during this COVID-19 period. Barely a month has passed and we are at it again! This time, our beta is Video Live Transcription, done while delivering video via live-streaming in high definition.

That’s our CEO, Alex doing a test live-stream from home


Hang on… you might think. Hasn’t this been done before? The answer would be yes. We have probably all experienced some form of live transcription before, on TV or on video-conferencing platforms. BUT doing this for a live video event on a live-stream is a totally different matter.

Those who can deliver this are limited to a handful of broadcasters with specialised equipment. If budget is not a problem, one can even hire a team of human transcribers to provide live captions. However, that is expensive and not exactly scalable, which is why we hardly see live transcription on broadcast or live-streams at all.

Just ask anyone in broadcast; they will appreciate the difficulty of delivering a service like this. Getting the live transcription is one thing, but keeping the live captions or subtitles in sync with the video and audio feed is a totally different challenge.
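
To make the syncing challenge concrete, here is a minimal sketch (not our production pipeline) of how timestamped transcript segments from a speech recogniser could be turned into WebVTT caption cues that a player overlays on the stream. The segment format is an assumption for illustration only.

```python
def to_vtt_timestamp(seconds: float) -> str:
    """Format seconds as an HH:MM:SS.mmm WebVTT timestamp."""
    total = int(seconds)
    millis = int((seconds - total) * 1000)
    hours, rem = divmod(total, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{hours:02}:{minutes:02}:{secs:02}.{millis:03}"

def segments_to_webvtt(segments):
    """segments: iterable of (start_sec, end_sec, text) tuples from a speech recogniser."""
    lines = ["WEBVTT", ""]
    for start, end, text in segments:
        lines.append(f"{to_vtt_timestamp(start)} --> {to_vtt_timestamp(end)}")
        lines.append(text)
        lines.append("")
    return "\n".join(lines)

# Hypothetical transcript segments for a short test stream
print(segments_to_webvtt([
    (0.0, 2.5, "Hello and welcome to the live-stream."),
    (2.5, 5.0, "Today we are testing live transcription."),
]))
```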

Providing live captions has the following benefits:

  • Improves accessibility - especially for the hearing-impaired, making your event more inclusive.

  • Increases engagement - longer view times and more interaction with your brand.

  • Improves comprehension - a better understanding of what is being said.

That is what this beta is about: providing these benefits through a simple and highly affordable video live transcription service alongside your live-stream.

If you think this might be useful to you, please write to me! Or forward this to someone who might find it useful. Please note that my COVID-19 message applies.

Stay indoors! Stay safe! Stay healthy! But most importantly, stay positive!

Yours sincerely,

Alex Chan
CEO, Babbobox

KEEP CALM and CARRY ON - Translated Machine Speech (Beta)

keep-calm.jpg

What a time to launch a beta! Right in the middle of a lock-down from a global pandemic. Even while we are all indoors grappling with and adjusting to the current situation, we need to live and work as normally as we can. Our team had a long chat about this and decided that we must press on, including with innovation.

The objective of this beta is rather simple: to convert a video into multiple languages using machine speech.

Perhaps you read my COVID-19 message a couple of weeks ago; here's the short version in Hindi (with machine speech). Or you can listen to the same message in 9 different languages HERE.

The video is based on my original English version, but I have no idea how accurate it is since I don't understand Hindi. However, to improve accuracy, users can edit both the original and target language text before finalizing the audio. 
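
For the curious, here is a minimal sketch of the kind of pipeline behind a feature like this, assuming hypothetical transcribe, translate and synthesize_speech helpers that stand in for whatever speech and translation services are used. The review step mirrors the manual correction described above.

```python
def translated_machine_speech(video_audio, source_lang, target_lang,
                              transcribe, translate, synthesize_speech,
                              review=lambda text: text):
    """Sketch of a transcribe -> translate -> synthesise pipeline (illustrative only).

    transcribe, translate and synthesize_speech are hypothetical callables
    standing in for real speech and translation services; review lets a human
    correct the text before the audio is finalised.
    """
    # 1. Speech recognition on the original audio track
    source_text = review(transcribe(video_audio, language=source_lang))
    # 2. Machine translation into the target language
    target_text = review(translate(source_text, source=source_lang, target=target_lang))
    # 3. Text-to-speech in the target language (male or female voice where available)
    return synthesize_speech(target_text, language=target_lang)
```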

There is a story to this beta. The idea came from a client about a year ago. They needed to overcome the language barrier in order to spread their message to different countries and communities. While subtitling helps, it's just not the same as speech. We were too engaged with other work then to do anything about it, but it has always been at the back of my mind. Fast forward one year, and the beta is born.

Translated Machine Speech is available in 45 languages. Some languages are available in both male and female voices. 


We think this can solve some real world problems. It is particularly useful for global engagements, speeches, announcements, lessons, training, sermons, news, etc. But we are limited by what we know. Thus, we would love to work with some real world scenarios and shape this into something useful. 

If you can think of a scenario, please write to me! Or forward this to someone who might find it useful. Please note that my COVID-19 message applies.

Stay indoors! Stay safe! Stay healthy! But most importantly, stay positive!


Yours sincerely,

Alex Chan
CEO, Babbobox

Our response to COVID-19. A message from CEO Alex Chan

Dear all,

These are extraordinary times. Schools closed, businesses shut, events cancelled, classes cut, etc. because of COVID-19. Businesses globally are going to be affected... and I fear many won't survive this crisis.

I'm just going to put it out here. If you, or anyone you know, need a Video AI platform like Videospace (www.videospace.co) to keep your business going, write to us HERE.

We can't make it free, but we promise to make it as close to cost as possible. This is particularly for SMBs, event organisers, those in training or teaching, NGOs, etc. MNCs need not apply, because it's the small businesses that are most at risk.

In times like this... I believe we need to help each other ride this out. Business is an ecosystem; no one wins by being the last one standing. I would much rather we stand together till the end of this crisis.

I wish you good health, remain resilient, keep positive, and most important of all, stay safe. Let’s ride this one out!

Yours sincerely,

Alex Chan

CEO

Announcement: Videospace is now a registered Trademark

logo_videospace.png

We are happy to announce that Videospace is now officially a registered trademark.

“A registered trademark provides protection for both ourselves and our customers, making it an important part of running a successful service,” said Alex Chan, Babbobox CEO. “This shows our pride and commitment in making Videospace a robust and reliable service for our customers.”

We would like to take this opportunity to thank everyone for your support. We look forward to many more successful engagements in the future.

Videospace is the official Video AI platform for Singapore Fintech Festival 2019 (11 - 15 Nov 2019)

We are excited that Videospace is the official Video AI platform that will be powering Singapore Fintech Festival 2019 (11 - 15 Nov 2019). This will be one of the largest Fintech events in Asia-Pacific, with an expected audience of 45,000.

videospace_sff2019.jpg

The annual event itself is transforming digitally by providing an AI-infused video platform (SFFxSwitch GO) where subscribers can discover fresh concepts and ideas by viewing presentations from the best of the industry, with searchable and translatable subtitles.

“We are excited that Videospace has been handpicked as the official Video On-Demand platform for SFF 2019,” says Alex Chan, Babbobox CEO. “We understand the importance of this event for Singapore and around the region. Thus, we are thrilled that the audience will experience SFF in a transformational way.”

We believe this will transform conferences as an industry:

  • Who? Especially beneficial for those who can't attend.

  • How? Physically or virtually attending.

  • When? During and beyond the event dates.

  • What? It is impossible to attend every talk during an event; this will allow one to.

  • Why? It is just a natural extension of an event like SFF.

Videospace is enabling the journey of digital transformation of the conference and events industry.

Note: Videospace runs on Microsoft Azure.

Videospace is powering the ITAP 2019 (Industrial Transformation ASIA-PACIFIC 22 - 24 Oct)

We are excited that Videospace is the Video AI platform that will be powering Industrial Transformation ASIA-PACIFIC, a HANNOVER MESSE event and Asia-Pacific’s leading trade event for Industry 4.0, with an expected audience of 18,000.


Not only is the event about industrial transformation, the event itself is transforming digitally by providing an AI-infused learning platform (ITAP Academy) where subscribers can discover fresh concepts and ideas by viewing presentations from the best of the industry, with searchable and translatable subtitles.

We believe this will transform conferences as an industry:

  • Who? Especially beneficial for those who can't attend.

  • How? Physically or virtually attending.

  • When? During and beyond the event dates.

  • What? It is impossible to attend every talk at an event, so why must we make the attendee choose?

  • Why? It is just a natural extension of any event. We know there is a demand for knowledge.

AI is just enabling the supply... in the journey of necessary transformation.

P.S. In case you are wondering, it's running on Microsoft Azure.

Digitally Transforming GLAM (Galleries, Libraries, Archives and Museums)

Some problems are so complex today that they can only be solved using AI. This certainly applies to what many consider to be the last frontier in Search technology - Audio Visual Media.

By using a combination of AIs (like speech and face recognition), Videospace unlocks hundreds of thousands of hours of knowledge within your media libraries by making them accessible and discoverable.

With the World's First Translated Video Search, we can further unleash the full potential of your media library by making it searchable in other languages, extending its accessibility and discoverability!

Besides running Videospace on a world-class video platform (the same platform used by the 2012 and 2016 Olympics), we are using a combination of the following advanced technologies:

  • Speech Recognition (over 100 languages)

  • Translation (over 60 languages)

  • Face Recognition

  • Video OCR (up to 26 languages)

  • Natural Language Processing (over 20 languages)

  • Video Search Engine - index and search video in time-series

  • World’s First Translated Search Engine – searches over 6,000 different language pairs
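
As a rough illustration of the “index and search video in time-series” idea above (and not the actual Videospace engine), the sketch below builds a time-coded index from transcript words so that a query jumps straight to the moments where a word is spoken.

```python
from collections import defaultdict

def build_time_index(words):
    """words: iterable of (word, timestamp_sec) pairs from a transcript."""
    index = defaultdict(list)
    for word, ts in words:
        index[word.lower()].append(ts)
    return {w: sorted(ts_list) for w, ts_list in index.items()}

def search(index, query):
    """Return the timestamps (in seconds) at which the query word is spoken."""
    return index.get(query.lower(), [])

# Hypothetical transcript fragment
index = build_time_index([("welcome", 1.2), ("archives", 14.8), ("archives", 95.0)])
print(search(index, "Archives"))  # -> [14.8, 95.0]
```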

To find out more,


Wishing all a Merry Christmas and a Fantastic New Year!

2018 has been another breakthrough year for us as we traveled the world and launched another TWO World's Firsts: the World's First Video Search Engine with Interactive Results (Birmingham) and the World's First Translated A.I. Video Search (Washington D.C.).

We would like to thank all of you for taking this sensational journey with us! We just can't wait to take on 2019! 

Wishing you a Merry Christmas and a Fantastic 2019!

Yours sincerely, 

All of us at Babbobox

Babbobox co-sells through Microsoft to fuel global expansion

babbobox videospace

Babbobox is leveraging the co-selling capabilities of Microsoft to fuel growth, innovation and expansion plans globally.

The Singapore-based Video Search Engine expert is tapping on the tech giant’s internal channel changes in the form of One Commercial Partner (OCP) to transform into an international player.

“Babbobox’s relationship with Microsoft did not happen overnight; we believe that it is a strong one and one that can stand the test of time,” said Alex Chan, Babbobox CEO.

“We have received tremendous support from numerous Microsoft local and international teams, industry specialists, product engineering and product business group teams, and have worked with them on a wide range of joint initiatives locally and around the world.”

Specific to co-selling, Alex added: “This partnership allows us to continue to capitalise on innovations from Microsoft before they are available in the market. We have experienced significant benefits from greater access to innovation and expertise through Microsoft.”

As Babbobox’s partnership with Microsoft deepens, we are looking to continue to expand geographically beyond our offices in Singapore and Malaysia. With this program, we will also look to enlist the support of Microsoft’s network of partners, distributors and resellers in various verticals globally.

“We have an ambitious roadmap globally, innovating A.I.-infused products that will keep Babbobox on the cutting edge of modern technological trends. We have come a long way, but this is only the start of our global journey,” Alex said.

Bringing AI Video Search to Broadcast Asia

alex-chan-babbobox-videospace-broadcast-asia.jpg

We are super excited about bringing our A.I. Video Search to Broadcast Asia after starting out in the UK, US and China in 2018. It feels so good to be home!

Babbobox CEO, Alex Chan will be talking about "The Age of AI" and how it will transform the entire broadcast and media industry with Video Search, Personalized Content and Video Big Data.

We will also be making a big announcement and showcasing it during the show! We are pretty sure it will blow you away! So do drop by and say Hi!

Video Big Data (Part 3) – From Mess to Intelligence?

The objective of Big Data is to gain business intelligence. Video Big Data is no different. The obvious difference is the source and the type of data that can be extracted from videos. Therein lie the main challenges: Extraction, Transformation and Analysis.

videospace-video-big-data.png

In this instalment, we will explain why Artificial Intelligence is central to making sense of the “mess” in video big data.

In the first installment (Part 1), we explained:

  • Why Video Big Data will absolutely dwarf current Big Data, and
  • How Video is the most difficult medium to extract data from

In the previous instalment (Part 2), we examined:

  • the kind of data elements that we can extract from videos (speech, text, objects, activities, motion, faces, emotions)

But first, let’s examine why there is a mess in video data. The short explanation is that a large part of video data is unstructured, in particular data from speech and text. For example, the transcript of a 30-minute news segment could cover multiple topics and events and mention numerous places and persons. To add to the complexity, we have to time-align when these words are spoken. In many ways, on-screen text (e.g. slide presentations that appear in videos) is the same.

Thus, we have to answer two key questions:

  1. How do we make sense of ‘messy’ video data?
  2. How can we extract knowledge or intelligence from that mess?

The answer lies in another field of Artificial Intelligence (A.I.): Natural Language Processing (NLP). NLP can process and attempt to make sense of unstructured text in the following areas:

  • Topic detection
  • Key phrase extraction
  • Sentiment analysis

In short, NLP can be used to turn unstructured video data into structured data. Only then can we start making sense of the data and shaping it into intelligence or actionable items like alerts, triggers, etc.
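
As a toy illustration (again, not the actual Videospace pipeline), the sketch below turns a raw transcript into a small structured record with naive key phrases and a crude sentiment score. Real NLP services do far more, but the shape of the output, unstructured text in, structured data out, is the point. The word lists are invented for the example.

```python
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "are", "we", "for", "but"}
POSITIVE = {"growth", "opportunity", "record", "win"}      # illustrative lexicon
NEGATIVE = {"crisis", "loss", "risk", "decline"}           # illustrative lexicon

def structure_transcript(transcript: str, top_n: int = 5) -> dict:
    """Turn an unstructured transcript into a simple structured record."""
    words = [w for w in re.findall(r"[a-z']+", transcript.lower()) if w not in STOPWORDS]
    key_phrases = [w for w, _ in Counter(words).most_common(top_n)]
    sentiment = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    return {"key_phrases": key_phrases, "sentiment_score": sentiment, "word_count": len(words)}

print(structure_transcript(
    "The fintech sector saw record growth, but analysts warn of risk and a possible decline."
))
```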

The field of Video Big Data is just starting. Without advancements in multiple areas of Artificial Intelligence (Speech Recognition, Computer Vision, Facial Analysis, Text Analytics, etc.), Video Big Data wouldn’t even exist, as it needs these fields to work in tandem or in sequence.

Given the rate at which we are producing videos, alongside our growing ability to extract video data using A.I., the only way is up, and we are not even close to uncovering the tip of the Video Big Data iceberg.

Video Big Data will be bigger than BIG. 

VideoSpace will be right in the middle of it all. Let’s put this prediction into a time capsule and revisit it in a few years.

Video Big Data (Part 2) - What kind of Video Data?

videospace-video-big-data.jpg

In the last installment, we explained:

  • Why Video Big Data will absolutely dwarf current Big Data
  • How Video is the most difficult medium to extract data from

This explains why Video Big Data remains a largely unexplored field. It also means there are immense opportunities available, because we have not even scraped the tip of this huge data iceberg.

In this installment, we will examine the kind of data elements that we can extract from videos. 

1. Speech
In an hour of video, a person can say up to 9,000 words, so imagine the amount of data from speech alone. However, the process of transcribing speech is filled with problems, and we are only now starting to reach an acceptable level of accuracy.

2. Text
Besides speech, text is probably the second most important element inside videos. For example, in a presentation or lecture, the speaker would augment the session with a set of slides; or think of news tickers appearing during a news broadcast.

3. Objects
There can be thousands of objects inside a video across different timeframes. Therefore, it can be quite challenging to identify which objects are in the video content and in which scenes they appear.

4. Activities
The difference between video and still images is motion. Different video scenes contain complex activities, such as “running in a group” or “driving a car”. The ability to extract activities gives a lot of insight into what the videos are about. This includes offensive content that might contain nudity or profanity.

5. Motion
Detecting motion enables you to efficiently identify sections of interest within an otherwise long and uneventful video. That might sound simple, but what if you have 10,000 hours of videos to review every night? Eyeballing every minute of video is a near-impossible task.

6. Faces
Detecting faces in videos adds face detection capability to any surveillance or CCTV system. This will be useful to analyse human traffic within a mall, a street, or even a restaurant or café. When we include facial recognition, it opens up another data dimension.

7. Emotion
Emotion detection is an extension of face detection that returns analysis of multiple emotional attributes from the faces detected. With emotion detection, one can gauge audience emotional response over a period of time.
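
To make this concrete, here is a minimal sketch of how such time-aligned video data elements might be represented. The field names are illustrative and not the Videospace schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VideoInsightFrame:
    """One time-aligned slice of data extracted from a video (illustrative only)."""
    start_sec: float
    end_sec: float
    speech: str = ""                                           # transcribed speech
    on_screen_text: List[str] = field(default_factory=list)    # OCR results
    objects: List[str] = field(default_factory=list)           # detected objects
    activities: List[str] = field(default_factory=list)        # e.g. "driving a car"
    faces: List[str] = field(default_factory=list)             # recognised identities
    emotions: List[str] = field(default_factory=list)          # e.g. "happy", "neutral"

frame = VideoInsightFrame(
    start_sec=120.0, end_sec=125.0,
    speech="Welcome to the keynote.",
    on_screen_text=["Industry 4.0"],
    faces=["Speaker A"], emotions=["happy"],
)
print(frame)
```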

This list of video data is certainly not exhaustive, but it is definitely a good starting point for the field of Video Big Data. In the next installment, we will examine some of the techniques used to extract this video data.

Yours sincerely,

The VideoSpace Team

Video Big Data (Part I) - An Introduction

videospace-video-big-data.jpg

Fact: YouTube sees more than 300 hours of videos uploaded every minute. That's 18,000 years' worth of videos in a year. And that's just YouTube! If we add all other videos in the public domain, we wouldn't even know where to start with the numbers.
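
For those checking the maths behind that figure:

```latex
300 \times 60 \times 24 \times 365 = 157{,}680{,}000 \text{ hours of video uploaded per year}
\qquad
\frac{157{,}680{,}000 \text{ hours}}{8{,}760 \text{ hours per year}} = 18{,}000 \text{ years of video}
```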

However, the even bigger numbers are actually hidden in the private domain from sources like broadcasters, media companies, CCTVs, GoPros, bodycams, smart devices, etc. We are recording videos at an unprecedented pace and scale. 

There is one word to describe this - BIG!

Which brings us to Video Big Data. Or should I say the lack of it. Even the term "Video Big Data" is rarely heard. The reason is pretty simple: it stems from the inability to extract video data and make sense of it. But there is so much information embedded inside videos waiting to be discovered; it's an absolute goldmine!

So the real question is... how can we extract value from videos?

However, the problem with video is that it is the most difficult medium to work with. There are a few reasons why: 

  • There are so many elements inside a video (speech, text, faces, objects, etc)
  • It is not static.
  • It is very difficult to extract the various elements of video data. 
  • Each video element requires a different data extraction technique.
  • It is very difficult to make sense of video data because of its unstructured nature.
  • It's expensive to extract data at scale

These problems are real and are preventing the arrival of the Age of Video Big Data. But there is hope yet. With substantial use of Artificial Intelligence, VideoSpace is beginning to crack this enigma.

In the next segment of this "Video Big Data" Series, we will examine how we can tackle these problems and extract value from videos. 

Launch Announcement - “Translated A.I. Video Search” to break Language Barriers for Video Search in Washington D.C.

Washington DC, 5 March 2018: - Babbobox officially announces the launch of the World’s First “Translated A.I. Video Search” at the Microsoft Tech Summit held in Washington DC, United States.

Humans are not only divided geographically, but also by language. Today, we are lifting this language barrier and allowing video search in a language that you do not understand.

Imagine you are doing research on Japanese culture and the only language that you know is English. How would you research videos that are in Japanese? The simple answer is, you can’t. That’s because even the best search engines today can only search for words in the same language that you enter. Meaning, if you key in English, there will not be any results because the videos are in Japanese.

Language is the BIGGEST Search barrier today.

What our “Translated A.I. Video Search” does is allow you to search in another language. Meaning, you can search a Japanese video in English (or any other language that you choose) and we will bring you to exactly where that word is said in the video. We can do that across 600 different language pairs.
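
Conceptually, and with a hypothetical translate_query helper standing in for a real translation service, the flow looks something like this. The time-coded index is assumed to map each spoken word to the timestamps where it occurs.

```python
def translated_video_search(query, query_lang, video_lang, video_index, translate_query):
    """Sketch of translated video search (illustrative, not the actual engine).

    video_index: mapping of word -> list of timestamps (seconds), built in the
    video's own language. translate_query is a hypothetical callable that
    translates the query word into that language.
    """
    # 1. Translate the query (e.g. English) into the video's language (e.g. Japanese)
    translated = translate_query(query, source=query_lang, target=video_lang)
    # 2. Look up the translated word in the time-coded index to find where it is spoken
    return video_index.get(translated.lower(), [])
```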

What this means is that these videos in Japanese are no longer limited by the language barrier, and the knowledge within them is now available not just to watch, but also to search.

Unleash your video library’s true potential by allowing your audience to search your videos in their own languages. In the process, we also automatically create a massive amount of Video SEO in multiple languages, thus, allowing other search engines to index and search your videos!

From the extracted video data, we use various NLP (Natural Language Processing) techniques to transform the unstructured big data into a language that you understand, which you can then analyse further, turning data into intelligence.

As of today, our Search Engine has the following languages supported:

  • Speech Recognition - 12 languages
  • Video OCR - 26 languages
  • Documents - 100+ languages
  • Translated Search – 600+ language pairs 

About Babbobox (website: www.babbobox.com)
Babbobox enables organisations to unleash the potential value in their digital assets by using A.I. Search. Babbobox has developed four World’s First breakthrough A.I. solutions, including our Unified Search Engine, which combines numerous advanced technologies such as Speech Recognition, Video OCR, Image Analysis, Artificial Intelligence, Translation and Enterprise Search. We are the only solution today that empowers you with the ability to index and search across all digital assets (documents, images, audio and videos) on a single platform. The extracted unstructured big data is then analysed using various Natural Language Processing (NLP) techniques.

We transform your unstructured data into intelligence. Made possible by A.I.

To find out more, please CONTACT US

Bringing A.I. Video Search to DC

mstechsummit_babbobox

Super excited about bringing our A.I. Video Search to Washington D.C. at the Microsoft Tech Summit. We hope to do Asia and Singapore proud (as it looks like we are the only ones)!

On top of that, we are also super excited about the announcement that we will be making during Tech Summit. We believe it's another WORLD'S FIRST! This will bring the world closer, in terms of knowledge, language and data.

If you think our ability to search 7 video elements (speech, text, objects, motion, faces, emotions, offensive content) is awesome...

What we are going to announce next will blow you away! Watch this space!

Babbobox featured on The Record for their World's First

Our launch - "World's First Video Search Engine with Interactive Results" in Birmingham (UK) was picked up by The Record and given some airtime. It feels great to be picked up and be given that bit of recognition for doing what we do to a global audience. 

Click HERE for the article.

Note: The Record is a global magazine featuring the Best of Enterprise Technology on The Microsoft Platform.

Thank you Birmingham... Hello Washington!

Finally, 2 intensive days of MS Tech Summit in Birmingham... done and dusted. Absolutely the right decision to come to the UK to do this. Massive event! Exactly the right platform to showcase our Video Search technologies.

babbobox-ceo-alex-chan
babbobox-ceo-alex-chan-clevertime-joao-penha-lopes
mstechsummit-birmingham

Caught up with Scott Guthrie. Held so many in-depth discussions with UK enterprises, universities, government agencies, etc. If we have our way, our stuff might even end up in Scotland Yard! So let's see...

Good-bye Birmingham... Next stop, Trump-capital Washington in March! I'm excited already...