Surge Unveils 20 More Startups It Is Backing To Make The Big Time

Three years almost to the day after launching Surge, Sequoia Capital India is today unveiling the sixth cohort of its scale-up program for startups based in India and the wider Asia Pacific region. The program will feature 20 new technology-enabled start-ups from across the region, ranging from an education specialist in Bangladesh to an agriculture specialist in India.

“This region is booming,” says Rajan Anandan, managing director of both Surge and Sequoia India. “The pace of innovation just continues to accelerate and the quality of our entrepreneurs keeps improving.”

It is what Sequoia hoped to see when it first launched Surge in 2019. “We were determined to address some of the constraints that were holding Indian entrepreneurs back,” Anandan recalls. “In particular, it was very difficult for early-stage start-up businesses to raise meaningful amounts of capital, so founders were spending all their time fund-raising, and there was a real shortage of the type of mentoring those founders needed.”

Anandan and his colleagues could see the potential of start-ups in the region, but were frustrated that so many of them were not achieving what was possible because of these issues. Surge was launched to close some of the gaps.

Businesses securing access to the programme typically receive investment of $1m-$2m directly from Sequoia – and often funding from co-investors. This gives founders the runway they need to concentrate on scaling the business to the stage where it is ready to raise Series A finance.

In addition, each Surge participant gets direct mentoring from Sequoia’s network of established entrepreneurs. They take part in a 16-week training and development course designed to build the skills and knowledge required to move from start-up to successful scale-up business. Advice and support is also available from the community of start-ups in the program – and those that have already been through it.

The recipe has proved remarkably successful. Including the cohort announced today, Surge has supported 246 founders at 112 start-ups across 15 sectors. Many former cohort members have already gone on to raise additional funding, with 45 companies from the first four cohorts having announced follow-on rounds.

Sequoia actively supports those fund-raisings with twice-yearly “Upsurge” initiatives, through which it introduces firms to prolific Series A investors.

Competition to get into Surge is tough – Anandan says the program has now run its slide rule over more than 10,000 potential entrants. “We’re effectively working with the very best of the start-ups in our region,” he says. “We invest in less than 1% of these businesses, though we are also committed to supporting entrepreneurship more widely and we publish a great deal of content and advice.”

This year’s entrants have a great deal to live up to, but the evolution of the Surge cohorts tells its own tale of how entrepreneurship across the Asia Pacific region is evolving at pace.

“This is the most geographically dispersed portfolio of companies we’ve picked so far,” Anandan says. It includes Surge’s first Malaysian company, as well as the first participants from Thailand and Taiwan.

“Another striking feature of the cohort is how many of them are building a business aimed at the whole world, rather than just their home market,” Anandan adds. “These companies are born global.” At least 13 of the 20 companies picked this year can be described in this way.

In part, that reflects the large number of software-as-a-service businesses that have made this year’s selection. The SaaS business model effectively enables a company to sell in any market where the problem it solves is relevant.

That is part of the selection process, explains Anandan. “We’re looking for awesome founders who are solving a genuine problem,” he says. “But that problem also has to be large enough.”

India’s Aqgromalin is one good example. It has built a technology platform with the potential to help millions of farmers diversify into animal husbandry and aquaculture in order to increase their income. Australia’s Checkbox is another: its no-code automation platform enables business users to develop their own tools and software using intuitive drag-and-drop techniques.

Not every cohort member will achieve its ambitions, Anandan concedes – that is the nature of backing start-ups. But with access to Sequoia’s capital, and a broader range of support services, their chances should improve. “It’s all about equipping founders with the ability they need to make the right decisions at an early stage – and ultimately improving the odds of their success.”

Full details of Surge’s cohort members can be found here.

Why graphic novels are lucrative IP for Web3: From MEFaverse to metaverse

Marvel’s multi-billion dollar IP enterprise is eating up the film and streaming market — but the metaverse is offering new opportunities and creating a whole new market.

Marvel is valued at nearly $6 billion for films alone, $40 billion for streaming and about $3 billion for consumer products, according to a 2021 Forbes analysis. While the media giant commands the lion’s share of graphic novel IP in film and streaming, the metaverse offers new opportunities for graphic novel IP. The ‘metaverse in entertainment’ market is expected to grow to $28.92 billion by 2026.

The entertainment market is essentially expanding with the creation of the metaverse, presenting opportunities to replicate the lucrative success that Marvel has enjoyed. But what made Marvel so popular, and why is the multiverse primed for the metaverse?

Since the inception of the metaverse as a concept, some of the earliest explorations have involved creating and adapting graphic novels for this new virtual environment: from Method Man’s comic book MEFaverse, to the adaptation of Dan LuVisi’s iconic Last Man Standing: Killbook of a Bounty Hunter, to Killtopia, which caters to Japan’s ‘otaku’ community of manga and anime fans.

But why is graphic novel IP so attractive to directors writing for a digital medium with interactive audiences? And what opportunities are potentially being left on the table? To understand the attraction of graphic novel IP, we only need to look at the formula of success that Marvel and DC have built. 

An ever-expanding world

Marvel’s IP is not one story, but a universe that continues to expand. Recent additions to Marvel’s onscreen world include She-Hulk: Attorney at Law, Ms. Marvel and the upcoming Secret Invasion. The stories that come to life in film and TV are often based on specific heroes within that universe — or, more aptly, the multiverse.

In film, appearance-altering costumes, special FX make-up and visual FX (VFX) enable directors to cast different actors to play the same character in the franchise. The most popular and talented actors, with the strongest following in the target demographic for the box office, can have their turn playing the hero. In fact, actors no longer need to sign long-haul multi-movie contracts with Marvel.

The metaverse offers even more creative diversity. Graphic novel characters can be customized to match the themes of different concept artists, and the same character can travel from a manga world into one that’s photorealistic. A good illustration is Doctor Strange’s journey through the multiverse, as we see him enter a variety of differently stylized worlds until he eventually finds himself surreally realized as a colorful gelatinous shape.

One of the key differentiators between a virtual world and a game within the metaverse — or what will be the metaverse — is this interoperability, the way in which an avatar could be used in different virtual worlds. The way avatars are translated stylistically in those different worlds is a key focus for metaverse builders. And it’s something Marvel has been doing well for some time. People love the graphic novel style of Marvel films and how they not only pay homage to the original art form but also amplify the movie experience with state-of-the-art VFX. 
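
To make that notion of interoperability concrete, here is a minimal, entirely hypothetical C++ sketch that separates a world-agnostic avatar description from each world’s stylistic translation of it. None of the types, fields or world names refer to a real standard or API; they are illustrative assumptions only.

```cpp
#include <cstdio>
#include <string>

// Hypothetical, world-agnostic avatar description: identity and traits
// travel with the avatar, while each world decides how to render them
// in its own visual style.
struct AvatarDescriptor {
    std::string id;
    std::string bodyShape;  // e.g. "tall", "stocky"
    std::string signature;  // a trait every world agrees to preserve
};

// Each world applies its own stylistic translation of the descriptor.
struct World {
    std::string name;
    std::string style;  // e.g. "manga", "photorealistic"

    void Render(const AvatarDescriptor& a) const {
        std::printf("[%s] rendering %s (%s build, keeps '%s') in %s style\n",
                    name.c_str(), a.id.c_str(), a.bodyShape.c_str(),
                    a.signature.c_str(), style.c_str());
    }
};

int main() {
    // The same avatar travels between two differently styled worlds.
    AvatarDescriptor gabriel{"gabriel-01", "tall", "red visor"};
    World manga{"MangaWorld", "manga"};
    World photoreal{"PhotorealWorld", "photorealistic"};
    manga.Render(gabriel);
    photoreal.Render(gabriel);
}
```

The design point is simply that the avatar’s identity lives outside any single world, so each world can re-style it without owning it.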

For example, LMS: Killbook of a Bounty Hunter is being translated for the metaverse after amassing a core fanbase. LMS is simultaneously a scrapbook-style graphic novel, a character bible for the anti-hero Gabriel and an introduction to the colorful yet deadly world of ‘New Amerika’. Initially released as a series of artworks, LMS soon gathered a solid fanbase that demanded more of Dan LuVisi’s world. The rights to LMS were bought by Section 9, which approached metaverse-as-a-service company Sequin AR with the idea of creating an LMS metaverse. Given the property’s rich world and pre-existing community, Sequin believed LMS was a perfect fit for a metaverse environment.

The attractiveness of graphic novel IP

Sequin AR’s CEO Rob DeFranco explains why the graphic novel IP was so attractive: “The world that Dan created is vivid, imaginative, and full of pop-culture references with a sharp satirical tone that makes it a model property for the metaverse. There is a big community already in place for LMS. For example, a Comic-Con special edition toy of Gabriel, created by the popular brand Funko, sold out on the first day of the convention. Since the book first launched 10 years ago, there has been a cultural shift in how we interact with the properties we love.” 

Graphic novels rely on captivating imagery, along with compelling stories. The community building the metaverse is a blend of creatives, technologists and storytellers, similar to the teams that produce the Marvel universe. For example, the team behind Method Man’s MEFaverse includes Method Man himself, and renowned graphics artist Jonathan Winbush of Winbush Immersive, with Xsens motion tracking technology helping them translate real-life movement into the digital world. It’s no coincidence that Winbush built his own brand as a creator from his time working at Marvel. 

“The trajectory of the NFT/Web3 space as a whole, in my opinion, only has one direction to go: up,” says Method Man. “I see no reason why it wouldn’t, as brands and individuals realize the unique opportunities and potential this space offers, as well as the utility it provides. That said, my hope is that it can continue to grow while remaining mindful of values such as inclusivity and positivity, which are both pillars of the MEFaverse community.”

The metaverse and the story of good vs. evil 

The metaverse has the potential to be many things, good or bad. Most metaverse evangelists also acknowledge how human influence tends to invade — and sometimes spoil — the utopian promise of future technology.

For example, Aragorn Meulendijks, Chief Metaverse Officer (CMO) from Your Open Metaverse, a distributed metaverse for streaming Web3 content, recently shared his candid thoughts on Elaine Pringle Schwitter’s HeadsTalk Podcast. According to Meulendijks, the mission for those building the metaverse needs to align with the reality of flawed human nature. This sentiment is omnipresent in Marvel; the premise of superhero films is that good and evil always exist in tandem, and even heroes are flawed. 

While there are inevitable flaws, the metaverse can also be employed altruistically. Representation and connection are frequent themes in graphic novels, often speaking to those who don’t feel part of mainstream pop culture. This links back to Winbush’s work on the MEFaverse.

“We wanted to create more ‘metamasks’ or PFPs with different traits to represent our community,” he explained. “Method Man’s motivation in creating the MEFaverse was to show his fans their powers, the unique traits that make them who they are, but in the superhero realm. He wanted everyone who was excited about the MEFaverse to have a mask that truly represents them.”

The building blocks of film production are being used to build the metaverse

The technology that underpins movie production is driving metaverse creation. For example, motion capture translates performers’ real-life movement onto avatars, while Unreal Engine is being used to create the worlds themselves.

Charles Borland, founder of real-time studio Voltaku explained: “When I was an actor in a video game called Grand Theft Auto IV, I would spend a lot of time in a mocap suit, and I’d been on a lot of TV and film shoots and saw just how inefficient the Hollywood production process is. I remember thinking, holy cow, when this technology and the economics get to a certain point, all of this gaming technology and real-time technology is going to revolutionize filmmaking and how you make content.” 

Talking about the use of technology in Killtopia, Charles elaborated: “If we’re going to build this in a game engine, like Unreal Engine, then we [had] to do things like set up a camera inside of Unreal. We knew we were going to have an actress and we were going to try and do this in real-time, but one of the things we were looking at was real-time ray tracing, and to push the envelope on that. We couldn’t go into the studio and do full camera tracking, so we wanted to find something inertia-based. Using the Xsens suit, capturing the raw mocap data, enabled us to create the avatars.”

From an investment standpoint, how Marvel’s magic formula for success translates to the metaverse is clear. But IP in the metaverse goes far beyond a franchise of characters. Fans build on these worlds themselves, becoming creators in their own right. And in order to create, they need to feel invested. And that’s where the technology underpinning interoperability is key.

Blockchain blockbusters

Killtopia’s Charles Borland explains: “To invest in interoperability, stakeholders and project owners need to know that the assets for whom they’re building aren’t going anywhere. Of course, that’s if by ‘decentralized,’ you mean you’re applying blockchain. What’s great about that is it’s immutable and it’s public. So I know if I build around a project, even if it tanks, my pipeline will stay. Because the things I’ve been referencing and looking at are going to stay online in this decentralized file hosting system, which is great.”

This is an example of how the technology used in metaverse creation is improving the entire production pipeline. Accelerating the content production workflow, and safeguarding the assets for future use, is a challenge even Marvel faces. 

Cultural shift between content creators and consumers

Borland highlights the cultural shift in how we interact with the properties we love. COVID-19 drove the rapid acceleration in digital experiences, helping us to forge genuine connections when real-life interaction wasn’t possible. The convergence of these behavioral changes and technology advancements is now paving the way for the future metaverse, with mixed reality live performances — which became more prevalent during the recent pandemic — offering a hint of what we might expect. 

Brett Ineson, founder of Animatrik Film Design, which has hosted mixed reality performances for Justin Bieber, Pentakill with Wave XR and even virtual circuses with Shocap Entertainment, says: “Nailing the look and feel of a world will be paramount to delivering the illusion of reality, and that’s where capture technology will come into play. Motion capture will be essential for creating lifelike animation for characters and creatures in these virtual worlds so that players feel like they are interacting with real beings.”

Technologists and storytellers are helping to unleash the potential of new IP into the metaverse. Right now, the reality is that the metaverse does not exist, but it represents the next step in immersive and engaging entertainment. The more engaged a community is, the more invested it is in the story. Powered by motion tracking, performance capture, interoperable avatars, virtual worlds and hip-hop artists turned superheroes, the metaverse is prime real estate for the next Marvel enterprise.

Rob DeFranco is CEO of Sequin AR.

Brett Ineson is founder of Animatrik Film Design.

Remco Sikkema is senior marketing communications manager at Movella and Xsens.

Fortnite Chapter 4 debuts with Unreal Engine 5.1

Fortnite Battle Royale Chapter 4 arrived today, and it makes use of Unreal Engine 5.1, Epic Games announced.

The debut shows how tightly Epic Games ties its overall strategy together. Fortnite is the prime revenue generator for the company, reaching tens of millions of players who buy in-game items. And Unreal Engine is the game developer tool that makes the advances in Chapter 4 available. To sell developers on the engine, Epic eats its own dog food by building Fortnite with Unreal to showcase what it can do.

Unreal Engine 5.1 provides new features that make the game look and run better. Unreal Engine 5 itself debuted earlier this year, ushering in a generational leap in visual fidelity and bringing a new level of detail to game worlds like the Battle Royale Island.

Shadows and lighting are better in Fortnite with Unreal Engine 5.1.

Fortnite Battle Royale now takes advantage of next-gen Unreal Engine 5 features such as Nanite, Lumen, Virtual Shadow Maps, and Temporal Super Resolution, all of which help the game shine on next-generation systems such as PlayStation 5, Xbox Series X|S, PC, and cloud gaming.

Epic Games said that over half of all announced next-gen games are being created with Unreal Engine. And it said developers can now take advantage of updates to the Lumen dynamic global illumination and reflections system. This is important stuff if you’re a game developer, or you’re expecting to build the metaverse.

Epic has made updates to the Nanite virtualized micropolygon geometry system, and virtual shadow maps that lay the groundwork for games and experiences running at 60 frames per second (fps) on next-gen consoles and capable PCs. These improvements will enable fast-paced competition and detailed simulations without latency, Epic said.

Nanite has also added a programmable rasterizer to allow for material-driven animations and deformations via world position offset, as well as opacity masks. This development paves the way for artists to use Nanite to program specific objects’ behavior, for example Nanite-based foliage with leaves blowing in the wind.
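
As a rough, engine-agnostic illustration of what a world position offset does, the C++ sketch below displaces each vertex along a wind direction by a sine wave whose phase depends on the vertex’s position, which is the basic trick behind swaying foliage. The function name and wind parameters are assumptions for illustration, not Unreal Engine’s actual API or Fortnite’s shader.

```cpp
#include <cmath>
#include <cstdio>

// Minimal stand-in for a 3D vector.
struct Vec3 { float x, y, z; };

// Illustrative world-position-offset for foliage sway (not Unreal API):
// each vertex is displaced along the wind direction by a sine wave whose
// phase depends on the vertex's world position, so leaves ripple rather
// than translate as a rigid block.
Vec3 WindOffset(const Vec3& worldPos, float timeSeconds) {
    const Vec3 windDir    = {1.0f, 0.3f, 0.0f}; // assumed wind direction
    const float strength  = 4.0f;               // assumed sway amplitude
    const float frequency = 2.0f;               // assumed oscillations/sec

    // Phase varies across space so neighbouring vertices move slightly
    // out of step, producing a wave travelling through the foliage.
    float phase = 0.05f * (worldPos.x + worldPos.y + worldPos.z);
    float sway  = strength * std::sin(frequency * timeSeconds + phase);

    return {windDir.x * sway, windDir.y * sway, windDir.z * sway};
}

int main() {
    Vec3 leaf = {120.0f, 40.0f, 300.0f};
    for (float t = 0.0f; t < 1.0f; t += 0.25f) {
        Vec3 o = WindOffset(leaf, t);
        std::printf("t=%.2fs offset=(%.2f, %.2f, %.2f)\n", t, o.x, o.y, o.z);
    }
}
```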

Nanite provides highly detailed architectural geometry. Specifically, buildings are rendered from millions of polygons in real time, and each brick, stone, wood plank, and wall trim is modeled. Natural landscapes are highly detailed too. Individual trees have around 300,000 polygons, and each stone, flower, and blade of grass is modeled.

On top of that, Lumen reflections provide high-quality ray traced reflections on glossy materials and water.

Water and shadows look prettier in Fortnite Battle Royale Chapter 4.

Lumen also provides real-time global illumination at 60 fps. You’ll see beautiful interior spaces with bounce lighting, plus characters reacting to the lighting of their surroundings. (For example, red rugs may bounce red light onto your outfit.) Outfits that have emissive (a.k.a. glowing) qualities will also scatter light on nearby objects and surfaces.
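
To see why a red rug tints a nearby outfit red, consider a single diffuse bounce, where outgoing light is simply the incoming light scaled by the surface’s color (albedo). The sketch below uses illustrative values and is a drastic simplification of what Lumen actually computes.

```cpp
#include <cstdio>

// Minimal stand-in for linear RGB radiance/albedo.
struct RGB { float r, g, b; };

// One diffuse bounce, per the red-rug example: the light leaving a surface
// is the incoming light filtered by the surface's albedo, so a red rug lit
// by white light re-emits predominantly red light onto nearby objects.
RGB DiffuseBounce(const RGB& incomingLight, const RGB& albedo) {
    return {incomingLight.r * albedo.r,
            incomingLight.g * albedo.g,
            incomingLight.b * albedo.b};
}

int main() {
    RGB whiteLight = {1.0f, 1.0f, 1.0f};
    RGB redRug     = {0.8f, 0.1f, 0.1f}; // assumed rug albedo
    // The bounce light that would tint a character standing on the rug.
    RGB bounce = DiffuseBounce(whiteLight, redRug);
    std::printf("bounce = (%.2f, %.2f, %.2f)\n", bounce.r, bounce.g, bounce.b);
}
```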

Virtual Shadow Maps allow for highly detailed shadowing. Each brick, leaf, and modeled detail will cast a shadow, and character self-shadowing is extremely accurate. This means that things like hats and other small details on characters will also cast shadows.
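
For intuition on the underlying technique, here is a minimal sketch of the classic shadow-map depth test, reduced to one dimension with made-up numbers: a point is shadowed if something sits closer to the light than it does. Virtual Shadow Maps layer much more sophistication (virtualized, very high-resolution shadow pages) on top of this basic idea.

```cpp
#include <cstdio>

// Classic shadow-map test, reduced to one dimension for illustration.
// A shadow map stores, per texel, the depth of the closest surface as
// seen from the light; a point is shadowed if something nearer to the
// light already covers its texel. All numbers here are illustrative.
bool IsInShadow(const float* shadowMap, int texel, float pointDepth) {
    const float bias = 0.005f; // small offset to avoid self-shadow "acne"
    return pointDepth > shadowMap[texel] + bias;
}

int main() {
    // Closest occluder depths for four texels, from the light's viewpoint.
    const float shadowMap[4] = {0.30f, 0.55f, 1.00f, 0.80f};

    // A surface point at depth 0.6: under texel 0 an occluder sits at 0.3
    // (a hat brim, say), so the point is shadowed; under texel 2 nothing
    // is closer to the light, so the point is lit.
    std::printf("texel 0: %s\n", IsInShadow(shadowMap, 0, 0.6f) ? "shadowed" : "lit");
    std::printf("texel 2: %s\n", IsInShadow(shadowMap, 2, 0.6f) ? "shadowed" : "lit");
}
```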

Temporal Super Resolution is an upgrade over Temporal Anti-Aliasing in Fortnite, and allows for high-quality visuals at a high framerate.
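
The common core of TAA-family techniques can be sketched in a few lines: each new frame is blended into an accumulated history, so the image stabilizes over time, and an upscaler like TSR reconstructs extra detail from those accumulated samples. The blend factor below is an illustrative assumption, not TSR’s actual heuristic, which is far more involved.

```cpp
#include <cstdio>

// Exponential moving average over frames: the essence of temporal
// accumulation in TAA-family techniques. 'alpha' controls how much of
// the new frame replaces the accumulated history each frame.
float AccumulateTemporal(float history, float currentSample, float alpha) {
    return (1.0f - alpha) * history + alpha * currentSample;
}

int main() {
    // A pixel whose true value is 1.0 but whose per-frame samples are noisy.
    float samples[] = {1.3f, 0.6f, 1.2f, 0.9f, 1.1f, 0.8f};
    float history = samples[0];
    for (float s : samples) {
        history = AccumulateTemporal(history, s, 0.1f); // assumed blend factor
        std::printf("accumulated = %.3f\n", history);
    }
    // The accumulated value converges toward 1.0 despite the noisy inputs.
}
```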

With the introduction of these UE5 features in Fortnite Battle Royale, Fortnite’s Video settings have changed on PC. You can see them here.

To run Nanite, the minimum hardware requirement is an Nvidia Maxwell-generation graphics card or newer, or an AMD GCN-generation card or newer.

For Nanite, Lumen, Virtual Shadow Maps, and Temporal Super Resolution to be available in Fortnite on your PlayStation 5 or Xbox Series X|S, make sure the “120 FPS Mode” setting (in the “graphics” section of the Video settings) is set to off.

Unreal’s reach has grown well beyond games. Unreal Engine has now been used on over 425 film and TV productions, and is integrated into over 300 virtual production stages worldwide. Unreal Engine usage in animation has grown exponentially, from 15 productions between 2015 and 2019 to over 160 productions from 2020 to 2022.

Is It Time To Talk About A More Sustainable Approach To Serving Our Customers?

At a recent event, I spoke to a Chief Technology Officer (CTO) for whom a day of 14 back-to-back half-hour meetings was not untypical. He explained that this pattern started during the early part of the pandemic, and that by 4 pm he was absolutely exhausted and struggling to stay focused and pay attention. He added, however, that over time he got used to such a heavy schedule and was able to manage his energy and concentration better.

On hearing this story, I commented that while I often hear stories like this from all sorts of executives at different firms, I am often left wondering how folks end up doing any work if they are in back-to-back meetings all day.

I asked, slightly tongue-in-cheek, how we had gotten to this point, given that I’d never seen a job description that contained any objective requiring a person to attend as many meetings as physically possible.

This raised a few smiles and quite a few nods.

Whilst my comment was playful, it also contained a serious point, one that I have made to many executives: they should actively manage their time to create the space necessary to really think about and understand the challenges they are facing.

I was thinking about that conversation again the other day when I came across some research from Microsoft about the impact on our brains and emotional state when we have back-to-back meetings.

Using an electroencephalography (EEG) cap, the Microsoft research team were able to monitor the electrical activity in the brains of back-to-back meeting participants. Unsurprisingly, they found that back-to-back virtual meetings are stressful, and that a series of meetings can decrease your ability to focus and engage.

However, the research also found that introducing short breaks between meetings to allow people to move, stretch, gather their thoughts or grab a glass of water can help reduce the cumulative buildup of stress across a series of meetings.

That’s a really useful insight, and I hope that more executives and their teams embrace the introduction of these short breaks between meetings to reduce stress, support well-being and maintain attention levels.

But I’ve also been thinking about whether these research findings have a broader application.

Specifically, I’ve been thinking about whether the calls taken by customer service agents could be analogous to a series of very short, back-to-back meetings. If they are, that has ramifications for the amount of stress customer service representatives have to deal with. This is brought into sharp focus when you consider that the average customer service representative is often expected to be constantly on calls for the duration of an 8-hour shift, apart from a 30-minute lunch break and two 15-minute breaks, one in the morning and one in the afternoon. That leaves seven hours of near-continuous talk time.

So, is it any wonder that the contact center industry faces perennial burnout and high levels of staff churn?

If we want to build a more sustainable approach to serving our customers, particularly over live channels like the phone or video, then we need to think more clearly and empathetically about our agents and what they go through.

Now, I know that technology is evolving to help with this challenge, and that’s great. But we shouldn’t stop there. Building a more attractive and sustainable contact center model will require us to rethink both contact center operations and their economics.
