

New startup shows how emotion-detecting AI is intrinsically problematic



In 2019, a team of researchers published a meta-review of studies claiming a person’s emotion can be inferred from their facial movements. They concluded that there’s no evidence emotional state can be predicted from expression – regardless of whether a human or technology is making the determination.

“[Facial expressions] in question are not ‘fingerprints’ or diagnostic displays that reliably and specifically signal particular emotional states regardless of context, person, and culture,” the coauthors wrote. “It is not possible to confidently infer happiness from a smile, anger from a scowl, or sadness from a frown.”

Alan Cowen disagrees with this assertion. An ex-Google scientist, he’s the founder of Hume AI, a new research lab and “empathetic AI” company emerging from stealth today. Hume claims to have developed datasets and models that “respond beneficially to cues of [human] emotions,” enabling customers ranging from large tech companies to startups to identify emotions from a person’s facial, vocal, and verbal expressions.

“When I got into the field of emotion science, most people were studying a handful of posed emotional expressions in the lab. I wanted to use data science to understand how people really express emotion out in the world, across demographics and cultures,” Cowen told VentureBeat via email. “With new computational methods, I discovered a new world of subtle and complex emotional behaviors that nobody had documented before, and pretty soon I was publishing in the top journals. That’s when companies began reaching out.”

Hume — which has ten employees and recently raised $5 million in funding — says that it uses “large, experimentally-controlled, culturally diverse” datasets from people spanning North America, Africa, Asia, and South America to train its emotion-recognizing models. But some experts dispute the idea that there’s a scientific foundation for emotion-detecting algorithms, regardless of the data’s representativeness.


“The nicest interpretation I have is that these are some very well-intentioned people who, nevertheless, are ignorant enough that … it’s tech causing the problem they’re trying to fix,” Os Keyes, an AI ethics scientist at the University of Washington, told VentureBeat via email. “Their starting product raises serious ethical questions … [It’s clear that they aren’t] thoughtfully treating the problem as a problem to be solved, engaging with it deeply, and considering the possibility [that they aren’t] the first person to think of it.”

Measuring emotion with AI

Hume is one of several companies in the burgeoning “emotional AI” market, which includes HireVue, Entropik Technology, Emteq, Neurodata Labs, Nielsen-owned Innerscope, Realeyes, and Eyeris. Entropik claims its technology, which it pitches to brands looking to measure the impact of marketing efforts, can understand emotions “by facial expressions, eye gaze, voice tonality, and brainwaves.” Neurodata developed a product that’s being used by Russian bank Rosbank to gauge the emotions of customers calling in to customer service centers.

It’s not just startups that are investing in emotion AI. In 2016, Apple acquired Emotient, a San Diego firm working on AI algorithms that analyze facial expressions. Amazon’s Alexa apologizes and asks for clarification when it detects frustration in a user’s voice. Speech recognition company Nuance, which Microsoft purchased in April 2021, has demoed a product for cars that analyzes driver emotions from their facial cues. And Affectiva, an MIT Media Lab spin-out that once claimed it could detect anger or frustration in speech in 1.2 seconds, was snatched up by Swedish company Smart Eye in May.

The emotion AI industry is projected to almost double in size from $19 billion in 2020 to $37.1 billion by 2026, according to Markets and Markets. Venture capitalists, eager to get in on the ground floor, have invested a combined tens of millions of dollars in companies like Affectiva, Realeyes, and Hume. As the Financial Times reports, film studios such as Disney and 20th Century Fox are using the technology to measure reactions to upcoming shows and movies. Meanwhile, marketing firms have tested the technology to see how audiences respond to advertisements for clients like Coca-Cola and Intel.

The problem is that there are few, if any, universal markers of emotion, putting the accuracy of emotion AI into question. The majority of emotion AI startups base their work on psychologist Paul Ekman’s seven fundamental emotions (happiness, sadness, surprise, fear, anger, disgust, and contempt), which he proposed in the early ’70s. But subsequent research has confirmed the common-sense notion that there are major differences in the way that people from different backgrounds express how they’re feeling.

Factors like context, conditioning, relationality, and culture influence the way people respond to experiences. For example, scowling — often associated with anger — has been found to occur less than 30% of the time on the faces of angry people. The expression supposedly universal for fear is stereotypically read as a threat or anger in Malaysia. Ekman himself later showed that there are differences between how American and Japanese students react to violent films, with Japanese students adopting “a completely different set of expressions” if someone else is in the room — particularly an authority figure.

Gender and racial biases are a well-documented phenomenon in facial analysis algorithms, attributable to imbalances in the datasets used to train them. Generally speaking, an AI system trained on images of lighter-skinned people will perform poorly on people whose skin tones are unfamiliar to it. This isn’t the only type of bias that can crop up. Retorio, an AI hiring platform, was found to respond differently to the same candidate in different outfits, such as glasses and headscarves. And in a 2020 study from MIT, the Universitat Oberta de Catalunya in Barcelona, and the Universidad Autonoma de Madrid, researchers showed that algorithms could become biased toward certain facial expressions, like smiling, which could reduce their recognition accuracy.
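The failure mode these studies describe can be made concrete with a simple audit: compute a model’s accuracy separately for each demographic group instead of in aggregate. The sketch below is purely illustrative — the group names, labels, and data are hypothetical — but it shows the kind of per-group breakdown that exposes such disparities.

```python
from collections import Counter

def per_group_accuracy(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    correct = Counter()
    total = Counter()
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    # Accuracy broken down per group, not averaged across everyone.
    return {g: correct[g] / total[g] for g in total}

# Toy, hypothetical data illustrating an imbalance-driven gap.
records = [
    ("group_a", "happy", "happy"), ("group_a", "sad", "sad"),
    ("group_a", "happy", "happy"), ("group_a", "angry", "angry"),
    ("group_b", "angry", "happy"), ("group_b", "sad", "sad"),
]
print(per_group_accuracy(records))  # {'group_a': 1.0, 'group_b': 0.5}
```

An aggregate accuracy of 83% would hide the fact that the model is wrong half the time for the underrepresented group — which is exactly the pattern the Cambridge and MIT studies found in commercial systems.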

A separate study by researchers at the University of Cambridge and Middle East Technical University found that at least one of the public datasets often used to train emotion AI systems contains far more Caucasian faces than Asian or Black faces. More recent research highlights the consequences, showing that popular vendors’ emotional analysis products assign more negative emotions to Black men’s faces than white men’s faces.

Voices, too, cover a broad range of characteristics, including those of people with disabilities, people with conditions like autism, and speakers of other languages and dialects such as African American Vernacular English (AAVE). A native French speaker taking a survey in English might pause or pronounce a word with some uncertainty, which could be misconstrued by an AI system as an emotion marker.

Despite the technical flaws, some companies and governments are readily adopting emotion AI to make high-stakes decisions. Employers are using it to evaluate potential employees by scoring them on empathy or emotional intelligence. Schools have deployed it to monitor students’ engagement in the classroom — and even while they do classwork at home. Emotion AI has also been used to identify “dangerous” people and has been tested at border control stops in the U.S., Hungary, Latvia, and Greece.

Training the algorithms

To mitigate bias, Hume says that it uses “randomized experiments” to gather “a rich array” of expressions — facial and vocal — from “people from a wide range of backgrounds.” According to Cowen, the company has collected more than 1.1 million images and videos of facial expressions from over 30,000 different people in the U.S., China, Venezuela, India, South Africa, and Ethiopia, as well as more than 900,000 audio recordings from over 25,000 people voicing their emotions, labeled with the speakers’ self-reported emotional experiences.

Hume’s dataset is smaller than Affectiva’s, which Affectiva once claimed was the largest of its kind with more than 10 million people’s expressions from 87 countries. But Cowen claims that Hume’s data can be used to train models to measure “an extremely wide range of expressions” including over 28 distinct facial expressions and 25 distinct vocal expressions.


“As interest in accessing our empathic AI models has increased, we’ve been preparing to ramp up access to them at scale. Thus, we will be launching a developer platform which will provide API documentation and a playground to developers and researchers,” Hume said. “We’re also collecting data and training models for social interaction and conversational data, body language, and multi-modal expressions which we anticipate will just expand use cases and our customer base.”

Beyond Mursion, Hume says it’s working with Hoomano, a startup developing software for “social robots” like Softbank Robotics’ Pepper, to create digital assistants that deliver better recommendations by accounting for users’ emotions. Hume also claims to have partnered with researchers at Mount Sinai and the University of California, San Francisco to see whether its models can pick up on symptoms of depression and schizophrenia “that no previous methods have been able to capture.”

“A person’s emotions broadly influence their behavior, including what they are likely to attend to and click on. Consequently, AI technologies like search engines, social media algorithms, and recommendation systems are already forms of ’emotion AI.’ There’s no avoiding it. So decision-makers need to worry about how these technologies are processing and responding to cues of our emotions and affecting their users’ well-being, unbeknownst to their developers,” Cowen said. “Hume AI is providing the tools needed to ensure that technologies are designed to improve their users’ well-being. Without tools to measure cues to emotion, there’s no way of knowing how an AI system is processing these cues and affecting people’s emotions, and no hope of designing the system to do so in a manner that is consistent with people’s well-being.”

Setting aside the fraught nature of AI to diagnose mental illness, Mike Cook, an AI researcher at Queen Mary University of London, says that the company’s messaging feels “performative” and the discourse suspect. “[T]hey’ve clearly gone to great pains to talk about diversity and inclusion and stuff, and I’m not going to complain that people are making datasets with more geographic diversity. But it feels a bit like it was massaged by a PR agent who knew the recipe for making your company look like it cares,” he said.

Cowen argues that Hume is more carefully considering the applications of emotion AI than competitors by establishing The Hume Initiative, a nonprofit “dedicated to regulating empathic AI.” The Hume Initiative — whose ethics committee includes Taniya Mishra, the former director of AI at Affectiva — has released regulatory guidelines that Hume says it’ll abide by in commercializing its technologies.

The Hume Initiative’s guidelines, a draft of which was shared with VentureBeat, ban applications like manipulation, deception, “optimizing for reduced well-being,” and “unbounded” emotion AI. They also lay out constraints for use cases like platforms and interfaces, health and development, and education, for example requiring educators to ensure that the output of an emotion AI model is used to give constructive — but non-evaluative — feedback.

Coauthors of the guidelines include Danielle Krettek Cobb, the founder of the Google Empathy Lab; Dacher Keltner, a professor of psychology at UC Berkeley; and Ben Bland, who chairs the IEEE committee developing standards for emotion AI.

“The Hume Initiative began by listing all of the known use cases for empathic AI. Then, they voted on the first concrete ethical guidelines. The resulting guidelines are unlike any previous approach to AI ethics in that they are concrete and enforceable. They detail the uses of empathic AI that strengthen humanity’s greatest qualities of belonging, compassion, and well-being, and those that admit of unacceptable risks,” Cowen said. “[T]hose using Hume AI’s data or AI models are required to commit to using them only in compliance with The Hume Initiative’s ethical guidelines, ensuring that any applications that incorporate our technology are designed to improve people’s well-being.”

Reasons for skepticism

Recent history is filled with examples of companies touting their internal AI ethics efforts only to have those efforts fall by the wayside — or prove to be performative and ineffectual. Google infamously dissolved its AI ethics board just one week after forming it. Reports have described Meta’s (formerly Facebook’s) AI ethics team, too, as largely toothless.

It’s often referred to as “ethics washing.” Put simply, ethics washing is the practice of fabricating or exaggerating a company’s interest in equitable AI systems that work for everyone. A textbook example for tech giants is when a company promotes “AI for good” initiatives with one hand while selling surveillance tech to governments and corporations with the other.


In a paper by Trilateral Research, a technology consultancy based in London, the coauthors argue that ethical principles and guidelines do not, by themselves, help practically explore challenging issues such as fairness in emotion AI. These need to be investigated in-depth, they say, to ensure that companies don’t implement systems in opposition to society’s norms and values. “Without a continuous process of questioning what is or may be obvious, of digging behind what seems to be settled, of keeping alive this interrogation, ethics is rendered ineffective,” they wrote. “And thus, the settling of ethics into established norms and principles comes down to its termination.”

Cook sees flaws in The Hume Initiative’s guidelines as written, particularly in its use of nebulous language. “A lot of the guidelines feel performatively phrased — if you believe manipulating the user is bad, then you’ll see the guidelines and go, ‘Yes, I won’t do that.’ And if you don’t care, you’ll read the guidelines and go, ‘Yes, I can justify this,’” he said. “[For example, they] wouldn’t actually preclude using their tools to drive users into pointless engagement loops, as long as (1) the developers are aware of the design choice (which they are) and (2) it is framed as ‘user wellbeing,’ which is something firms like Twitter already frame engagement as (people must enjoy the app or they’d stop using it).”

Cowen stands by the belief that Hume is “open[ing] the door to optimize AI for individual and societal well-being” rather than short-term business goals like user engagement. “We don’t have any true competitors because the other AI models available to measure cues of emotion are very limited. They focus on a very narrow range of facial expressions, completely ignore the voice, and have problematic demographic biases. These biases are woven into the data that AI systems are usually trained on. On top of that, no other company has concrete ethical guidelines for the use of empathic AI,” he said. “We are creating a platform that centralizes the deployment of our models and offers users more control over how their data is used.”

But guidelines or no, policymakers have already begun to curtail the use of emotion AI technologies. The New York City Council recently passed a rule requiring employers to inform candidates when they’re being assessed by AI — and to audit the algorithms every year. An Illinois law requires consent from candidates for analysis of video footage, and Maryland has banned the use of facial analysis altogether.

Some vendors have proactively stopped offering or placed guardrails around their emotion AI services. HireVue announced that it’d stop using visual analysis in its algorithms. And Microsoft, which initially claimed its sentiment-detecting Face API could detect expressions across cultures, now notes in a disclaimer that “facial expressions alone do not represent the internal states of people.”

As for Hume, Cook’s read is that the company “made some ethics documents so people don’t worry about what they’re doing.”

“Their ethics guidelines wouldn’t preclude, say, the CIA contracting their emotional tools in order to manipulate or torture people,” Cook said. “[But perhaps] the biggest issue I have is I can’t tell what they’re doing. The part that’s public … doesn’t seem to have anything on it apart from some datasets they made.”



Experts weigh in on how ONDC is set to transform the ecommerce business landscape in India



“The Open Network For Digital Commerce (ONDC) is not a platform. A platform suggests that to transact, everyone has to be in a closed loop. ONDC by design is a network of networks. So it is completely open access, it’s unbundled and interoperable,” clarified Thampy Koshy, CEO, ONDC, dispelling some of the concerns in the minds of businesses around ONDC.

Thampy highlighted that ONDC is not a platform but a set of protocols enabling the exchange of products and services. There is no central platform; it only allows multiple platforms to talk to each other. Anybody who has a product or service to sell can make it available in the open network that any smart buying platform can access. Thampy was speaking at a panel discussion titled ‘The network effect: ONDC & India’s ecommerce’ at TechSparks 2022. Other panelists included Anjali Bansal, Founder, Avaana Capital & Steering Committee Member, ONDC, and Kumar P Saha, Founder, ndhgo.

The interoperable network has started its beta testing process with categories like groceries and food and beverages for small retailers in Bengaluru. ONDC plans to add home and kitchen, agriculture, fashion, apparel, footwear, and accessories across India within the next few months. The ONDC pilot is currently live in about 80-85 cities.

Opportunities for smaller sellers

“As a buyer or as a seller, it’s your choice as an individual or a small business or a large business, who you choose to transact with. Think UPI, but UPI with physical goods and services. So of course, there is a much higher degree of complexity,” said Anjali.

The network, according to her, is meant to be a public good. It’s meant for many entrepreneurs and founders; just as UPI generated a whole set of new business models, ONDC, she hopes, will generate a similar set of new business models that will create enormous consumer value, shareholder value, and eventually national value.

Kumar believes ONDC will not just be restricted to the MSMEs. “I think it will be equally big for the enterprises and large businesses. And this is just the beginning,” he said.

“Hyperlocal and kirana is where we have started, because I think that has a very immediate consumer effect and visibility. But the full power of the ONDC network absolutely applies to large enterprises, mid-sized enterprises, consumers, and kiranas,” said Anjali.

Ensuring consumer trust in the network

Any network participant who wants to be a part of ONDC, Thampy outlined, would first have to undergo a certain set of due diligence – who they are, what kind of business they are in, what their credibility is, and so forth. Secondly, they have to ensure that their IT systems are certified by the team that’s developing the entire protocol. Third, they sign a network participant agreement, which is common for everybody. It’s not a negotiated agreement: whether you’re a large entity like a bank or a financial institution, or a small startup, all sign the same network agreement, which binds you to a certain behaviour in the network.

This network agreement will also include the network policies as they exist today and as they evolve from time to time. These policies essentially define how participants have to behave among themselves and with the outside world. This is digitally trackable, as it’s all part of the protocol itself. Failure to adhere to the policies will lead to penalties and suspensions.

“While you are in a network, whether you’re a seller platform or a buyer platform, your performance in the network is continuously tracked and rated, and it’s available to the whole world. Once you are there as a business and have established your credibility with your GSTN number, PAN number and so on, you are a tracked person to the community as a whole,” added Thampy.
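The onboarding and reputation flow Thampy describes — due diligence, IT-system certification, a common non-negotiated agreement, then continuous publicly visible rating — can be sketched as a simple state model. This is an illustrative sketch only; the class, field names, and checks below are hypothetical and not part of any actual ONDC protocol or API.

```python
from dataclasses import dataclass, field

@dataclass
class NetworkParticipant:
    name: str
    gstn: str                       # business identity used during due diligence
    due_diligence_passed: bool = False
    it_systems_certified: bool = False
    agreement_signed: bool = False  # one common agreement, same for everyone
    ratings: list = field(default_factory=list)
    suspended: bool = False         # policy violations lead to suspension

    def can_transact(self) -> bool:
        # All three onboarding gates must clear before joining the network.
        return (self.due_diligence_passed and self.it_systems_certified
                and self.agreement_signed and not self.suspended)

    def record_rating(self, score: float) -> None:
        # Performance is continuously tracked; ratings are publicly visible.
        self.ratings.append(score)

    def public_rating(self):
        return sum(self.ratings) / len(self.ratings) if self.ratings else None

seller = NetworkParticipant(name="Acme Kirana", gstn="29ABCDE1234F1Z5")
seller.due_diligence_passed = True
seller.it_systems_certified = True
seller.agreement_signed = True
seller.record_rating(4.5)
print(seller.can_transact(), seller.public_rating())  # True 4.5
```

The point of the design, as described in the panel, is that trust is enforced structurally: a participant cannot transact until every gate clears, and its track record accumulates in the open rather than inside any one platform.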

“Most people come into business to do business in a sustainable way,” said Kumar. “You live and die by your reputation. So if you build a reputation for good quality, you’ll get lots more business, and if you develop a bad reputation for bad quality, you don’t get business,” he added.

Setting a global example

“Ecommerce is a global problem, where everybody is trying to find a solution. For instance, the American economy is trying to find solutions over regulation by developing the American Innovation and Choice Online Act. Similarly, Europe is trying to ensure fair and open digital markets with the Digital Markets Act. But India is showing a global example,” said Thampy.

“India is trying to use technology and markets with enabling policies, which is a truly democratic method. India showing how to use the market to the big capitalists in the world is a fantastic global example,” he concluded.




Kenya’s Uncover raises $1M to expand skincare product enterprise across Africa • TechCrunch



Africa’s beauty and personal care market is growing, accelerated by the continent’s young and fashion-conscious population, increasing spending power, and urbanization. The market’s potential has in recent years attracted major brands, with Fenty Beauty by Rihanna and LVMH being the latest entrants.

Niche local brands are also emerging to offer tailored beauty and skin care products. Kenya-based Uncover Skincare is one of them and it seeks to revolutionize the sector through data-led manufacturing that is aligned with the needs of the modern African woman.

Backed by a $1 million seed funding, Uncover is scaling its operations in Kenya and expanding to Nigeria in January. This is after recently introducing a new range of skin products in the market, with plans to launch more next year. Its products are sold through its online platform, on marketplaces, and in the stores of partner brands.

“We are using the funding to launch more products, go into additional markets, and also double down on our tech and data to effectively produce, reach and market to our audience,” Uncover co-founder and CEO Sneha Mehta told TechCrunch.

FirstCheck Africa, Samata Capital, Future Africa, and IgniteXL participated in the round, alongside angel investors including ex-SokoWatch COO Kwenhui Tawah and ex-L’Oreal executive and current WPP Scangroup CEO Patricia Ithau. The new funding brings the total amount raised by Uncover since its launch in 2020 to $1.225 million.

Image Credits: Uncover Skincare

Mehta co-founded Uncover with Jade Oyateru (COO) and Catherine Lee (advisor), inspired to build a data-driven, digital-first health and wellness brand for the African woman by leveraging their experience and expertise.

Mehta has over 10 years’ experience helping businesses scale across Africa, while Oyateru is a nutritionist and consumer goods expert. Lee is an economist turned filmmaker.

Uncover was launched after incubation at Antler. It uses African botanicals and outsources its manufacturing to Korean original design manufacturers, which it says ensures its products are “healthy, safe, affordable and effective.”

“Our production happens in Korea (one of the world’s biggest beauty markets), where we are leveraging the best technology, labs, and scientists in the world who understand stability testing, safe ingredients, and formulations. We are able to deliver because women in our community have graciously provided information and tried our products, to help us formulate specifically for this market,” said Mehta.

The startup also offers virtual consultations with an in-house esthetician, produces “skin-tertainment” content to reach more users, and recently introduced a skin quiz for personalized recommendations.

“I have experienced the lack of safety in products firsthand, the lack of information and the feeling of being stuck. This is part of the reason why we are building these tools for people to get personalized information, and advice including diet tips.”

Mehta says since launch the startup’s revenue has grown 20-fold, buoyed by the growing demand for its products, and the community it continues to build.

“We have had incredible traction since, and our community has grown from zero to about 60,000 women in Kenya in two years… we have built brand awareness, loyalty, and our values of education, knowledge, and empowerment have been established in the market,” she said.

Uncover hopes to continue building and strengthening this community, starting with Kenya and Nigeria, which are the next major beauty and personal care markets on the continent after South Africa.




Experts deliberate on technologies leading to the rise of gaming and content in India



Technology is seeping into every aspect of the world and online gaming is no stranger to it. Over the years, online gaming and esports have been through a lot of changes and today this industry is more advanced and progressive. Technology has enabled a variety of changes which is why online gaming continues to grow in popularity.

To discuss these new-age technologies in depth and how they are changing the gaming landscape, a panel discussion was held on ‘Playing to the fantasy: Rise of gaming & content in India’ at TechSparks 2022, featuring Gaurav Barman, Senior Business Development Manager, AWS; Vinayak Shrivastav, Co-founder and CEO, VideoVerse; Ranga Jagannath, Senior Director – Growth, Agora; and Ratheesh Mallaya, Director of Products, Zynga.

Here are some of the key highlights from the discussion:

Tech enabling the growth of esports in India

The panel discussion started with understanding the rise of esports in India. Despite being around for more than a decade, the sector has only recently seen a boom in popularity. The Indian esports industry is currently worth Rs 250 crore, with a compound annual growth rate (CAGR) forecast at 46 percent over the next four years. At that rate, the industry is expected to grow more than fourfold, to an estimated Rs 1,100 crore by 2025.
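As a quick sanity check, the two figures quoted above are consistent with each other: compounding the current Rs 250 crore at the forecast 46 percent CAGR over four years lands close to the Rs 1,100 crore estimate.

```python
# Compound Rs 250 crore at a 46% CAGR over four years.
start_crore = 250
cagr = 0.46
years = 4
projection = start_crore * (1 + cagr) ** years
print(round(projection))  # about 1136 crore, in line with the ~Rs 1,100 crore estimate
```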

Technology is a major propelling force that’s driving this rise. Gaurav of AWS shed more light by discussing a few of the technologies that AWS provides that help in building more interactive engagement for esports and gaming platforms.

“Esports companies in India can build engagement which is much more interactive, by offering players the ability to communicate with each other beyond linguistic or geographical boundaries. This can be done by providing multilingual, real-time translation across geographies. Companies can also build real-time recommendation systems in terms of the feed that the user sees,” said Gaurav.

Vinayak of VideoVerse spoke about how technology that aids in the production of short-form content is going to play a key role in driving the popularity of esports. “I think what’s important for all of us to see is that e-gaming as an entire market is just continuously changing. It’s going to continuously keep evolving over the next couple of years,” he said. In such a scenario, Vinayak believes that the services VideoVerse provides with its flagship product Magnifi will play an important role in amplifying the entire ecosystem.

Magnifi uses state-of-the-art AI and ML technology to auto-produce key moments and highlights from live matches within seconds. This kind of short-form content is, Vinayak feels, the need of the hour, and it will drive the growth of the esports market as well.

Hits and misses in the industry

The panelists further deliberated what has been working well for the gaming industry and what has tanked completely. Ranga of Agora spoke in detail about real-time engagement and how greatly it has benefited the gaming landscape.

“What we’ve seen is that apps and games which have embedded technologies that are truly real-time tend to be able to monetise much better and significantly more, as compared to games that either don’t have real-time engagement or have laggy real-time engagement. Games that have real-time engagement also tend to be more active, with better user retention,” he remarked.

He further explained that it doesn’t just stop at real-time engagement, but the ability for gaming companies to analyse what’s happening in that real-time engagement is what is working in their favour.

While it’s important to know what is working for the esports landscape, it’s even more important to understand what’s not. Ratheesh shared some pearls of wisdom from some of the failures that Zynga has faced.

“When you’re looking to build local, there is definitely a big opportunity out there. But that has to be on top of a really strong core that is fun and engaging for the users. We launched a game around the time of Independence Day in India based on a match game, but it did not turn out the way we wanted it to because of this reason,” he said.

Ratheesh highlighted that there is great scope for games with Indian IP. In fact, according to a recent report, about 60 percent of the audience that doesn’t play games said they would play if there were an Indian IP. But just building a game on something vernacular or on Indian IP will not work by itself. He also pointed out that currently top-grossing games like Garena Free Fire, Coin Master, and Candy Crush all have great visuals and quality, and that’s what entices users to stay hooked.

Talking about other hits, Gaurav emphasised how Web3 technologies and blockchain will hugely benefit the industry. Gaming companies are now looking at making digital assets interoperable and with the advent of the Metaverse, an entire make-believe world is possible where players can socialise, connect, and share content beyond the scope of gaming.

“From my perspective, technology is going to play a pivotal role in the evolution of this industry. Be it blockchain, NFT, or metaverse, all of that will come together as a platform where interoperability is enabled through underlying technology and used to build these solutions at this point in time,” he said.

Along with Web3 technologies, Vinayak shared how cloud-based video editing and streaming solutions will become pivotal for the overall growth of the ecosystem as they’ll make broadcasting, editing, and collaborating with peers in the industry much easier.

Microtransactions in the gaming industry

The panel ended by discussing microtransactions in the gaming industry, where Ratheesh shared some useful insights on how transactions and in-app purchases have to be tailored to the genre of the game. “There are different monetisation strategies, like a subscription-based model, a battle pass kind of monetisation strategy, or an impulse buy. Those are all options available to you. But what strategy you deploy depends completely on the genre of the game,” he shared. He also suggested that microtransactions in gaming apps must be personalised to users’ needs, and that they must be pivotal in framing the monetisation strategy for any gaming app.

