Megan Gell
Apr 23, 2018

How it really works... facial recognition

The chief design officer at Kairos takes us through the intricacies of facial recognition at events.

Facial recognition (FR) companies have tended to focus on the security vertical due to its surveillance and security capabilities, but with the launch of the FR-enabled iPhone X last year, a host of other applications have gained momentum.

Kairos is a venture-backed FR start-up founded in Miami in 2012. It now provides FR software solutions for companies all over the world, from other start-ups to Fortune 50 giants.

How can FR be useful for events?

We specialise in three areas: identity, emotion analysis and demographics. Identity is the more traditional aspect of verifying or authenticating someone, such as at event registration, or controlling access to the back-of-house environment.

We also work with companies such as cruise lines and theme parks that have large volumes of images and video they're looking to search, organise and categorise, primarily by the people in that content.

For emotion, we're able to read the emotion of people's faces – their facial expressions and the movement of muscles. We measure those movements in real-time and provide that back as a data point. This allows companies to see how people are engaging with their content or the sentiment of people lining up to register at an event.

That informs things like customer satisfaction, loyalty, whether they're happy, and the general feeling towards an attraction at an event.
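To make that "data point" idea concrete, here is a minimal illustration of how per-frame emotion scores from a video clip might be rolled up into a single sentiment read-out. The field names and numbers are invented for the example; this is not Kairos's actual output format.

# Hypothetical per-frame emotion scores for one face in a short clip;
# a real system would return something like this for every frame analysed.
per_frame_emotions = [
    {"joy": 0.82, "surprise": 0.10, "sadness": 0.02},
    {"joy": 0.75, "surprise": 0.15, "sadness": 0.05},
    {"joy": 0.40, "surprise": 0.05, "sadness": 0.30},
]

# Average each emotion across the clip to get one engagement/sentiment data point.
summary = {
    emotion: sum(frame[emotion] for frame in per_frame_emotions) / len(per_frame_emotions)
    for emotion in per_frame_emotions[0]
}
print(summary)  # roughly {'joy': 0.66, 'surprise': 0.10, 'sadness': 0.12}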

We're also able to measure demographic information such as age, gender and ethnicity. That's useful for personalising digital displays and for automatically gathering insights about a large group of people in a way that perhaps wouldn't be possible with traditional methods.

If you think about digital signage, it gives companies the ability to customise and personalise those messages based on who's there at any given moment. It's as much about companies bringing the technology into the event space as it is about event providers having the technology themselves.

How does it work?

We package the technology through software development tools called APIs (application programming interfaces) and SDKs (software development kits), which basically means any company can integrate our technology into their software applications. They can embed it in a mobile app or run it on their own servers.

So a customer – by way of their technical team – is able to submit content such as images and videos via the API into our platform. Our platform analyses it and the API sends back a result. That could be something like verifying that two photos show the same person, or, if you send in a video, we send back emotional data about the faces that we find [in every frame].

It's a way of simplifying the communication between two things, done over the internet so clients are submitting things remotely from wherever they are. It could be from a device, it could be from a Dropbox folder – it doesn't really matter. They have an encrypted key, which allows them to access our technology, and then we send it back in an encrypted way as well. It doesn't rely on any one programming language, so it doesn't matter if you built your application in PHP or in C++.
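As a rough sketch of what that remote flow can look like in practice, a developer might submit two photos for verification from Python as below. The endpoint URL, header name and response fields are placeholders for illustration, not Kairos's documented API.

import base64
import requests  # third-party HTTP client

API_URL = "https://api.example-fr-provider.com/verify"  # placeholder endpoint
API_KEY = "YOUR_ENCRYPTED_KEY"                           # key issued to the customer

def encode_image(path):
    # Read a local photo and base64-encode it for the JSON payload.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

payload = {
    "image": encode_image("registration_selfie.jpg"),
    "gallery_image": encode_image("photo_on_file.jpg"),
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": "Bearer " + API_KEY},
    timeout=30,
)
result = response.json()

# A verification response typically carries a match decision and a confidence score.
print("Same person?", result.get("match"), "confidence:", result.get("confidence"))

Because the exchange is just HTTPS and JSON, the same call could equally be made from PHP or C++, which is the language-agnostic point above.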

On the SDK side, the features are the same but just in a more contained environment. A lot of our larger customers want to control access to their data; they're not sending it outside of their network. They're able to put our technology inside their firewall.

So they're taking advantage of all our features, but in their own environment. They can put it on a cell phone; they could build a mobile app and it would be self-contained on that device. That also improves speed: you're approaching near real-time because you're doing all the processing on the device rather than sending it out over the internet.
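To illustrate that self-contained, on-device approach, the sketch below uses OpenCV's bundled face detector as a generic stand-in – it is not Kairos's SDK – but the point holds: every frame is processed locally and nothing leaves the machine.

import cv2  # OpenCV; all processing runs on the local device

# Haar-cascade face detector that ships with the opencv-python package.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

camera = cv2.VideoCapture(0)  # built-in webcam
try:
    for _ in range(300):  # analyse roughly ten seconds of frames at 30fps
        ok, frame = camera.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        # Identity, emotion or demographic analysis would run here,
        # still on-device, which is what keeps latency near real-time.
        print("faces in frame:", len(faces))
finally:
    camera.release()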

What about concerns from the market?

The history of FR and its connection to surveillance and security, as well as some of the other things bubbling up around AI, cause some concern. But the key is how these algorithms are taught to learn – they require data. If the data they're learning from is corrupted, not labelled correctly or not diverse enough, then misidentifying somebody because of characteristics such as ethnicity or gender becomes a big problem.

Understanding those challenges and how to overcome them is what will drive the success of the technology, not necessarily a particular type of feature or even how accurate it is. The bigger conversation around 'Is the technology really serving people in a respectful and trustful way?’, that’s what we think about. It’s a big topic.

So how do you ensure privacy and data security?

First of all, we don't work with companies that are looking to exploit people. That's a core value of ours. Beyond that, we anonymise all of the facial data we collect.

When we find a face in an image, we're actually finding the mathematical points of the face, the geometry of the face – we don't know who that person is, and we don't know any of their personal information; we're simply extracting the maths from the proportions of the face. That data gets taken, the image gets discarded, and the mathematical composite is then encrypted into a unique code that cannot be reverse-engineered.

So in the interests of verifying 'that is Person X or not', we don't actually know if it's Person X – we're just comparing one set of coordinates against another. On the emotion and demographic side of things, it works in exactly the same way – we're just collecting the anonymous data points. That's been a very deliberate, well thought-out process for us and we continue to add to it as we go along.
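A minimal sketch of that anonymised matching idea, assuming a generic embedding model rather than Kairos's actual pipeline – the function names, vector size and threshold are all illustrative:

import numpy as np

def extract_template(image_bytes):
    # Placeholder for a real extractor that maps facial landmarks/geometry
    # to a fixed-length vector; once this runs, the photo itself can be discarded.
    rng = np.random.default_rng(abs(hash(image_bytes)) % (2**32))
    return rng.standard_normal(128)  # dummy 128-dimensional template

def is_same_person(template_a, template_b, threshold=0.6):
    # Verification compares two anonymous templates, not two photos:
    # no name or identity is involved, only the distance between coordinates.
    cosine = float(template_a @ template_b /
                   (np.linalg.norm(template_a) * np.linalg.norm(template_b)))
    return cosine > threshold

with open("registration_selfie.jpg", "rb") as f:
    template_live = extract_template(f.read())
with open("photo_on_file.jpg", "rb") as f:
    template_stored = extract_template(f.read())

print("Match:", is_same_person(template_live, template_stored))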

Are attitudes changing?

In the last 12-18 months we've seen quite explosive growth in demand. The emergence of AI in the business mainstream has been a real driver for our technology, which actually fits into a bigger "genre" around computer vision – where machines are able to see the world and understand it. So beyond the human face, they're able to detect objects, scenes and motion.

Also, in recent years – and looking forward four or five years – there has been enormous growth in the number of cameras in the world. By 2022, reports estimate around 45 billion embedded cameras will be in existence. That's cameras in your cell phone, laptop, cars and even fridges. Cameras will become more and more prevalent in all of our technology as it becomes more connected, all in the interest of understanding the world.

How much does it cost?

At an entry level, any developer can sign up and start using the API for free. They can experience some of the features and start testing very, very quickly. Somebody who knows what they're doing can get going in two minutes. That's free up to a limit.

Beyond that, we have a number of pricing tiers, and we publish our pricing on our website so it's transparent. The plans increase incrementally based on how much usage you get, how much support we give and other factors.

So we go from free up to about US$500-$5,000 per month, and beyond that, we're seeing deals from low six figures into the mid seven figures. It really depends on the size of the customer and what they're trying to achieve.

Source:
CEI
