Ambient’s computer vision detects dangerous behaviors, but raises privacy concerns

Computer vision, a technology that uses algorithms to “see” and evaluate objects, people, and events in the real world, is a rapidly expanding market within the broader AI sector. That’s because the applications are practically limitless, ranging from agricultural crop monitoring to medical diagnostics and driverless vehicle testing. Allied Market Research anticipates that the computer vision market will be worth a combined $144.46 billion by 2028.

Digital transformation in the enterprise, spurred by the pandemic, has further accelerated the growth. For example, Zebra Medical Vision, a computer vision startup focused on health care, was acquired in August 2021 in a deal worth $200 million. Meanwhile, Landing AI has secured tens of millions for its visual inspection dashboards that enable engineers to train, test, and deploy computer vision to devices such as laptops.

Another rising class of startups — one focused on analyzing camera and sensor footage — is attracting significant investment from institutional backers. Ambient is among the newest arrivals — its computer vision software attempts to detect potentially dangerous situations and alert stakeholders. Launched in 2017, the Palo Alto, California-based company is emerging from stealth with $52 million in venture capital led by Andreessen Horowitz with participation from Y Combinator, Stanford, and others including Okta cofounder Frederic Kerrest, CrowdStrike CEO George Kurtz, and Microsoft CVP Charles Dietrich.

Computer vision for security

Ambient was cofounded by CEO Shikhar Shrestha, who previously worked at Google on the Project Tango team, and CTO Vikesh Khanna, who built data analytics systems at Dropbox.

Ambient grew out of research Shrestha and Khanna did while at Stanford. Powered by what Shrestha calls a “context graph,” the platform plugs into CCTV and sensor systems and assesses risk factors when looking at real-time or historical recordings — namely different location contexts (e.g., the type of space and time of day), behaviors (the movement of an object and object interactions), and objects (people, vehicles, animals, and more).

“I founded Ambient in January 2017 alongside Khanna. However, the inspiration for Ambient came many years before,” Shrestha told VentureBeat via email. “At 12 years old, I was robbed at gunpoint in a location that was monitored by a security camera. At the time, I was expecting a patrol officer to intervene, which never happened. From that experience, I learned that despite the ubiquity of security cameras in our world, few recordings of incidents lead to real-time response. It made me fascinated with security technology, tinkering with, designing, and building alarm and surveillance systems.”

Above: Ambient’s monitoring dashboard.

Image Credit: Ambient

Shrestha asserts that Ambient’s algorithms can identify threats like perimeter breaches and “tailgating” without facial recognition or profiling, as well as learn new behaviors and threats automatically over time. The platform captions video content, ranging from scene-level context to individual actions, with descriptions like “this is a busy street” or “there is a man walking.”

“The four key components of the Ambient platform are video data processing; the detection of objects, events, and context; threat signature evaluation; and prioritization for human intervention,” Shrestha said. “Ambient provides hundreds of threat signatures that customers can deploy out-of-the-box and we’re rapidly adding new threat signatures based on customer requests from the field. Today, we deliver … over 100 threat signatures [and our funding] will enable us to build on that foundational library to quickly double the number of threat signatures that we deliver in the next year.”

Ambient says it has processed over 20,000 hours of video from its customers, which it claims include five of the top 10 U.S. tech brands by market capitalization as well as “a number of” Fortune 500 companies.

“Our customers currently span a wide variety of industry verticals including education, finance, manufacturing, media and entertainment, retail, real-estate and residential security, and technology,” Shrestha added. “We intend to expand our penetration of the enterprise market into a wide range of industries and types of buildings, from corporate campuses to datacenters, schools, and museums.”

Potential challenges

Like most computer vision systems, Ambient’s are trained on a combination of open source datasets and in-house generated images and videos showing examples of people, places, and things. The company claims that it takes steps to ensure that the dataset is sufficiently diverse, but history has shown that bias can creep into even the best-designed models.

For example, previous research has found that large, publicly available image datasets are U.S.- and Euro-centric, encoding humanlike biases about race, ethnicity, gender, weight, and more. Flaws can arise from other sources, like differences in sun paths between the northern and southern hemispheres and variations in background scenery. Studies show that particular camera models can cause an algorithm to be less effective in classifying objects that it was trained to detect. Even architectural design choices in algorithms can contribute to biased classifications.

These biases can lead to real-world harm. ST Technologies’ facial recognition and weapon-detecting platform was found to misidentify Black children at a higher rate and frequently mistook broom handles for guns. Meanwhile, Walmart’s AI- and camera-based anti-shoplifting technology, which is provided by Everseen, came under scrutiny over its reportedly poor detection rates. Facial recognition software used by the Detroit police falsely identified a Black man as a shoplifter. And Google’s Cloud Vision API at one time labeled thermometers held by Black people as “guns” while labeling thermometers held by light-skinned subjects as “electronic devices.”

“This technology, which tends to involve object and behavior recognition, is far from accurate,” Jennifer Lynch, surveillance litigation director at the Electronic Frontier Foundation, told Fast Company in a recent interview about gun-detecting AI technologies.

Ambient says that the data it uses to train its video processing algorithm is annotated using crowdsourcing services before being fed into the system. But labels, the annotations from which many computer vision models learn relationships in data, also bear the hallmarks of data imbalance. Annotators bring their own biases and shortcomings to the table, which can translate to imperfect annotations. For example, some labelers for MIT’s and NYU’s 80 Million Tiny Images dataset contributed racist, sexist, and otherwise offensive annotations, including nearly 2,000 images labeled with the N-word and labels such as “rape suspect” and “child molester.”

In 2019, Wired reported on the susceptibility of platforms like Amazon Mechanical Turk — where many researchers and companies recruit annotators — to automated bots. Even when the crowdworkers are verifiably human, they’re motivated by pay rather than interest, which can result in low-quality data — particularly when they’re treated poorly and paid a below-market rate. Being human, annotators naturally also make mistakes — sometimes major ones. In an MIT analysis of popular benchmarks including ImageNet, the researchers found mislabeled images, like one breed of dog being confused for another.

Shrestha claims that Ambient’s technology minimizes bias by taking a “system training” approach to computer vision. “System-level blocks” control which task an individual computer vision model is focused on and optimize the model for that narrow task, he says, so that a single model isn’t making the final decision.

“[W]e’re breaking the problem down to system-level blocks which have very tightly described inferences. For example, [one] human interaction block can detect one of these 10 interactions, [while] this scene element block can detect one of these 20 scene elements,” Shrestha added. “This architecture means that we are not asking data labelers to label based on unstructured assumptions. In our architecture, models have structured outputs associated with specific tasks. Examples would be: detect a person, a car, the color of a shirt, an interaction between people and a car. These structured outputs constrain the labeler appropriately so that they cannot respond with an arbitrary label and bias the model.”
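To make the idea concrete, here is a minimal sketch in TypeScript of what such a structured, constrained label schema could look like. The block names and category lists below are illustrative assumptions for this article, not Ambient’s actual taxonomy:

```typescript
// label-schema.ts: a sketch of "structured outputs" for data labelers.
// The block names and category lists are illustrative assumptions,
// not Ambient's actual taxonomy.

// Each system-level block accepts only a fixed vocabulary, so a labeler
// cannot introduce an arbitrary (and potentially biased) free-text label.
type HumanInteraction =
  | "walking"
  | "running"
  | "entering_vehicle"
  | "exiting_vehicle"
  | "carrying_object";

type SceneElement = "door" | "fence" | "parking_lot" | "lobby" | "street";

type DetectedObject = "person" | "car" | "animal";

interface FrameLabel {
  objects: DetectedObject[];
  interactions: HumanInteraction[];
  sceneElements: SceneElement[];
}

// A valid annotation is constrained to the enumerated vocabulary;
// anything outside it fails to type-check.
const example: FrameLabel = {
  objects: ["person", "car"],
  interactions: ["entering_vehicle"],
  sceneElements: ["parking_lot"],
};

console.log(example);
```

Because every field is restricted to an enumerated vocabulary, an annotator (or an annotation tool) cannot inject the kind of free-text labels that polluted datasets like 80 Million Tiny Images.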

Data privacy and surveillance

Anticipating that some customers might be wary of granting a vendor like Ambient access to CCTV footage, the company attempts to allay concerns in its terms of service agreement. Ambient reserves the right to use only “aggregated, de-identified data” from customers to improve, test, and market its services and claims that it doesn’t use any sensitive customer data uploaded to its platform for these purposes.

“Our product has been architected from day one for data minimization. Essentially, this means that we eliminate personally identifiable information from our data collection efforts,” Shrestha said. “Raw video data is not processed by Ambient computer vision algorithms. Instead, the algorithms only process raw footage metadata [and not] facial attributes, gender attributes, or identifiers of race. This comes with significant constraints. For example, we will not offer facial recognition analysis as part of our solution because it is impossible to deliver facial recognition without collecting and processing [such data].”

Ambient doesn’t make it clear in its terms of service under what circumstances it’ll release customer data, such as when requested by law enforcement or served a subpoena. The company also doesn’t say how long it retains data — only that the data “may be irretrievably deleted” if a customer’s account is terminated.

“We are committed to working with our customers to ensure that their use of the product is consistent with the requirements of applicable privacy and data protection laws,” Shrestha said. “We have strong technical controls in the product that limit both what the product can do and who has access to the product, [and] we’re committed to putting appropriate technical constraints in place in the interest of preventing potential harm.”

It’s not just users that might be concerned about Ambient’s AI-powered technology. Privacy advocates worry that systems like it — including those from Umbo, Deep Sentinel, and other vendors — could be co-opted for less benign purposes, potentially normalizing greater levels of surveillance.

In the U.S., each state has its own surveillance laws, but most give wide discretion to employers so long as the equipment they use to track employees is visible or disclosed in writing. There’s also no federal legislation that explicitly prohibits companies from video recording staff during the workday.

“Some of these techniques can be helpful but there are huge privacy issues when systems are designed to capture identity and make a determination based on personal data,” Marc Rotenberg, president of the Electronic Privacy Information Center, told Phys.org in an interview. “That’s where issues of secret profiling, bias and accuracy enter the picture.”

LastPass hacked, OpenAI opens access to ChatGPT, and Kanye gets suspended from Twitter (again) • TechCrunch

Aaaaand we’re back! With our Thanksgiving mini-hiatus behind us, it’s time for another edition of Week in Review — the newsletter where we quickly wrap up the most read TechCrunch stories from the past seven(ish) days. No matter how busy you are, it should give you a pretty good idea of what people were talking about in tech this week.

Want it in your inbox every Saturday morning? Sign up here.

most read

Instafest goes instaviral: You’ve probably been to a great music festival before. But have you been to one made just for you? Probably not. Instafest, a web app that went super viral this week, helps you daydream about what that festival might look like. Sign in with your Spotify credentials and it’ll generate a promo poster for a pretend festival based on your listening habits.

LastPass breached (again): “Password manager LastPass said it’s investigating a security incident after its systems were compromised for the second time this year,” writes Zack Whittaker. Investigations are still underway, which unfortunately means it’s not super clear what (and whose) data might’ve been accessed.

ChatGPT opens up: This week, OpenAI widely opened up access to ChatGPT, which lets you interact with their new language-generation AI through a simple chat-style interface. In other words, it lets you generate (sometimes scarily well-written) passages of text by chatting with a robot. Darrell used it to instantly write the Pokémon cheat sheet he’s always wanted.

AWS re:Invents: This week, Amazon Web Services hosted its annual re:Invent conference, where the company shows off what’s next for the cloud computing platform that powers a massive chunk of the internet. This year’s highlights? A low-code tool for serverless apps, a pledge to give AWS customers control over where in the world their data is stored (to help navigate increasingly complicated government policies), and a tool to run “city-sized simulations” in the cloud.

Twitter suspends Kanye (again): “Elon Musk has suspended Kanye West’s (aka Ye) Twitter account after the latter posted antisemitic tweets and violated the platform’s rules,” writes Ivan Mehta.

Spotify Wraps it up: Each year in December, Spotify ships “Wrapped” — an interactive feature that takes your Spotify listening data for the year and presents it in a super visual way. This year it’s got the straightforward stuff like how many minutes you streamed, but it’s also branching out with ideas like “listening personalities” — a Myers-Briggs-inspired system that puts each user into one of 16 camps, like “the Adventurer” or “the Replayer.”

DoorDash layoffs: I was hoping to go a week without a layoffs story cracking the list. Alas, DoorDash confirmed this week that it’s laying off 1,250 people, with CEO Tony Xu explaining that they hired too quickly during the pandemic.

Salesforce co-CEO steps down: “In one week last December, [Bret Taylor] was named board chair at Twitter and co-CEO at Salesforce,” writes Ron Miller. “One year later, he doesn’t have either job.” Taylor says he has “decided to return to [his] entrepreneurial roots.”

audio roundup

I expected things to be a little quiet in TC Podcast land last week because of the holiday, but we somehow still had great shows! Ron Miller and Rita Liao joined Darrell Etherington on The TechCrunch Podcast to talk about the departure of Salesforce’s co-CEO and China’s “great wall of porn”; Team Chain Reaction shared an interview with Nikil Viswanathan, CEO of web3 development platform Alchemy; and the ever-lovely Equity crew talked about everything from Sam Bankman-Fried’s wild interview at DealBook to why all three of the co-founders at financing startup Pipe stepped down simultaneously.

TechCrunch+

What lies behind the TC+ members-only paywall? Here’s what TC+ members were reading most this week:

Lessons for raising $10M without giving up a board seat: Reclaim.ai has raised $10 million over the last two years, all “without giving up a single board seat.” How? Reclaim.ai co-founder Henry Shapiro shares his insights.

Consultants are the new nontraditional VC: “Why are so many consultant-led venture capital funds launching now?” asks Rebecca Szkutak.

Fundraising in times of greater VC scrutiny: “Founders may be discouraged in this environment, but they need to remember that they have ‘currency,’ too,” writes DocSend co-founder and former CEO Russ Heddleston.

Building global, scalable metaverse applications

Previously we talked about the trillion-dollar infrastructure opportunity that comes with building the metaverse — and it is indeed very large. But what about the applications that will run on top of this new infrastructure?

Metaverse applications will be very different from the traditional web or mobile apps that we are used to today. For one, they will be much more immersive and interactive, blurring the lines between the virtual and physical worlds. And because of the distributed nature of the metaverse, they will also need to be able to scale globally — something that has never been done before at this level.

In this article, we will take a developer’s perspective and explore what it will take to build global, scalable metaverse applications.

As you are aware, the metaverse will work very differently from the web or mobile apps we have today. For one, it is distributed, meaning there is no central server that controls everything. This has a number of implications for developers:

  • They will need to be able to deal with data that is spread out across many different servers (or “nodes”) in a decentralized manner.
  • They will need to be able to deal with users that are also spread out across many different servers.
  • They will need to be able to deal with the fact that each user may have a different experience of the metaverse, depending on their location and the devices they are using.

These challenges are not insurmountable, but they do require a different way of thinking about application development. Let’s take a closer look at each one.

Data control and manipulation

In a traditional web or mobile app, all the data is stored on a central server. This makes it easy for developers to query and manipulate that data because everything is in one place.

In a distributed metaverse, however, data is spread out across many different servers. This means that developers will need to find new ways to query and manipulate data that is not centrally located.

One way to do this is through the blockchain itself. This distributed ledger, as you know, is spread out across many different servers and allows developers to query and manipulate data in a decentralized manner.
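As a minimal illustration, the sketch below reads state from an Ethereum-style node using the ethers.js library. The RPC endpoint, contract address, and ABI are placeholders, and this is one possible approach rather than a prescribed metaverse API:

```typescript
// read-chain.ts: a sketch of querying decentralized data, assuming the
// ethers.js (v6) library and an Ethereum-style JSON-RPC endpoint.
import { ethers } from "ethers";

async function main() {
  // Hypothetical RPC endpoint; any node on the network can answer.
  const provider = new ethers.JsonRpcProvider("https://rpc.example.com");

  // The same query works against any node, with no central server.
  const blockNumber = await provider.getBlockNumber();
  console.log("Latest block:", blockNumber);

  // Reading contract state is equally node-agnostic. The address and
  // minimal ABI below are illustrative placeholders.
  const abi = ["function balanceOf(address owner) view returns (uint256)"];
  const token = new ethers.Contract(
    "0x0000000000000000000000000000000000000000",
    abi,
    provider
  );
  const balance = await token.balanceOf(
    "0x0000000000000000000000000000000000000000"
  );
  console.log("Balance:", balance.toString());
}

main().catch(console.error);
```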

Another way to deal with the challenge of data is through what is known as “content delivery networks” (CDNs). These are networks of servers that are designed to deliver content to users in a fast and efficient manner.

CDNs are often used to deliver web content, but they can also be used to deliver metaverse content. This is because CDNs are designed to deal with large amounts of data that need to be delivered quickly and efficiently — something that is essential for metaverse applications.
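The core mechanic of a CDN, serving cached copies from a location near the user instead of going back to the origin every time, can be sketched in a few lines. This toy TypeScript version caches responses in memory; a real CDN runs logic like this at many edge locations worldwide, and the origin URL below is a placeholder:

```typescript
// edge-cache.ts: a toy illustration of CDN-style caching. A real CDN
// runs this kind of logic at many edge locations close to users.
const cache = new Map<string, { body: string; expires: number }>();
const TTL_MS = 60_000; // keep cache entries for one minute

async function cachedFetch(url: string): Promise<string> {
  const hit = cache.get(url);
  if (hit && hit.expires > Date.now()) {
    return hit.body; // served from the edge, no origin round-trip
  }
  const res = await fetch(url); // fall back to the origin server
  const body = await res.text();
  cache.set(url, { body, expires: Date.now() + TTL_MS });
  return body;
}

cachedFetch("https://origin.example.com/asset.json").then(console.log);
```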

Users and devices

Another challenge that developers will need to face is the fact that users and devices are also spread out across many different servers. This means that developers will need to find ways to deliver content to users in a way that is efficient and effective.

One way to do this is through the use of “mirrors.” Mirrors are copies of the content that are stored on different servers. When a user requests content, they are redirected to the nearest mirror, which helps to improve performance and reduce latency.
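A simple client-side version of this idea is to probe every mirror and pick the fastest responder. In the sketch below, the mirror URLs and the /ping endpoint are hypothetical:

```typescript
// pick-mirror.ts: choosing the lowest-latency mirror, a sketch.
// The mirror URLs and /ping endpoint are hypothetical.
const MIRRORS = [
  "https://us.mirror.example.com",
  "https://eu.mirror.example.com",
  "https://asia.mirror.example.com",
];

// Time a lightweight request to a mirror; unreachable mirrors sort last.
async function measureLatency(url: string): Promise<number> {
  const start = Date.now();
  try {
    await fetch(`${url}/ping`, { method: "HEAD" });
    return Date.now() - start;
  } catch {
    return Number.POSITIVE_INFINITY;
  }
}

// Probe all mirrors in parallel and return the fastest responder.
async function nearestMirror(): Promise<string> {
  const latencies = await Promise.all(MIRRORS.map(measureLatency));
  return MIRRORS[latencies.indexOf(Math.min(...latencies))];
}

nearestMirror().then((url) => console.log("Redirecting to:", url));
```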

When a user’s device is not able to connect to the server that is hosting the content, another way to deliver content is through “proxies.” Proxies are servers that act on behalf of the user’s device and fetch the content from the server that is hosting it.

This can be done in a number of ways, but one common way is through the use of a “reverse proxy.” In this case, the proxy server is located between the user’s device and the server that is hosting the content. The proxy fetches the content from the server and then delivers it to the user’s device.
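Here is a minimal sketch of that pattern in Node.js/TypeScript. The upstream host is a placeholder, and a production proxy would add caching, TLS termination, and more robust error handling:

```typescript
// reverse-proxy.ts: a toy reverse proxy in Node.js. The upstream host
// is a hypothetical placeholder for the server hosting the content.
import http from "node:http";

const UPSTREAM_HOST = "content-host.example.com";
const UPSTREAM_PORT = 80;

const proxy = http.createServer((clientReq, clientRes) => {
  // Forward the incoming request to the upstream content server.
  const upstreamReq = http.request(
    {
      host: UPSTREAM_HOST,
      port: UPSTREAM_PORT,
      path: clientReq.url,
      method: clientReq.method,
      headers: { ...clientReq.headers, host: UPSTREAM_HOST },
    },
    (upstreamRes) => {
      // Relay the upstream response back to the user's device.
      clientRes.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(clientRes);
    }
  );

  upstreamReq.on("error", () => {
    clientRes.writeHead(502);
    clientRes.end("Upstream unreachable");
  });

  // Stream any request body straight through to the upstream server.
  clientReq.pipe(upstreamReq);
});

proxy.listen(8080, () => console.log("Proxy listening on :8080"));
```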

Location and devices

As we mentioned before, each user’s experience of the metaverse will be different based on their location and the devices they are using. This is because not everyone has the same tech setup, and this plays a pivotal role in how the metaverse is experienced by each user.

For example, someone who is using a virtual reality headset will have a completely different experience than someone who is just using a desktop computer. And someone who is located in Europe will have a different experience than someone who is located in Asia.

Though it may not be obvious why geographical location would play a part in something that is meant to be boundless, think of it this way. The internet is a physical infrastructure that is spread out across the world. And although the metaverse is not bound by the same physical limitations, it still relies on this infrastructure to function.

This means that developers will need to take into account the different geographical locations of their users and devices and design their applications accordingly. They will need to be able to deliver content quickly and efficiently to users all over the world, regardless of their location.

Different geographical locations also have different laws and regulations. This is something that developers will need to be aware of when designing applications for the metaverse. They will need to make sure that their applications are compliant with all applicable laws and regulations.

Application development

Now that we’ve looked at some of the challenges that developers will need to face, let’s take a look at how they can develop metaverse applications. Since the metaverse is virtual, the type of development that is required is different from traditional application development.

The first thing that developers will need to do is to create a “space”. A space is a virtual environment that is used to host applications.

Spaces are created using a variety of different tools, but the most popular tool currently is Unity, a game engine used to create 3D environments.

Once a space has been created, developers will need to populate it with content. This content can be anything from 3D models to audio files.

The next step is to publish the space. This means that the space will be made available to other users, who will be able to access the space through a variety of different devices, including desktop computers, laptops, tablets, and smartphones.

Finally, developers will need to promote their space. This means that they will need to market their space to users.

Getting applications to scale

Since web 3.0 is decentralized, scalability is usually the biggest challenge, because traditional centralized servers are a poor fit for its architecture. IPFS is one solution that can help with this problem.

IPFS is a distributed file system used to store and share files. IPFS is similar to BitTorrent, but it is designed to be used for file storage rather than file sharing.

IPFS is a peer-to-peer system, which means that there is no central server. This makes IPFS resilient and scalable: there is no single point of failure, and capacity grows as more peers join the network.

To use IPFS, developers will need to install it on their computer and add their space to the network. Then, other users will be able to access it.
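As a sketch of that flow, the snippet below publishes a hypothetical space manifest through a local IPFS daemon, assuming the ipfs-http-client package is installed; the returned content identifier (CID) is the address other users would fetch it by:

```typescript
// publish-space.ts: adding a space to IPFS, a sketch that assumes a
// local IPFS daemon is running and the ipfs-http-client package is
// installed. The manifest contents are hypothetical.
import { create } from "ipfs-http-client";

async function publishSpace() {
  // Connect to the local node's HTTP RPC API (default port 5001).
  const client = create({ url: "http://127.0.0.1:5001/api/v0" });

  // A toy space manifest; a real space would reference 3D assets,
  // audio files, and so on.
  const manifest = JSON.stringify({
    name: "my-space",
    assets: ["models/room.glb", "audio/ambience.mp3"],
  });

  // Adding content returns a CID (content identifier), which any peer
  // on the network can use to fetch the data. No central server involved.
  const { cid } = await client.add(manifest);
  console.log("Space published at CID:", cid.toString());
}

publishSpace().catch(console.error);
```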

The bottom line on building global, scalable metaverse applications

To finish off, the technology to build scalable metaverse applications already exists, but a lot of creativity is still required to make it all work together in a user-friendly way. The key is to keep the following concepts in mind:

  • The metaverse is global and decentralized
  • Users will access the metaverse through a variety of devices
  • Location and device management are important
  • Application development is different from traditional development
  • Scalability is a challenge, but IPFS can help

By keeping these concepts in mind, developers will be able to create metaverse applications that are both user-friendly and scalable.

Clearly, we can’t have an article series about building the metaverse without discussing NFTs. In fact, these might be the key to making a global, decentralized metaverse work. In our next article, we will explore how NFTs can be used in the metaverse.

Daniel Saito is CEO and cofounder of StrongNode

Intel researchers see a path to trillion-transistor chips by 2030

Intel announced that its researchers foresee a way to make chips 10 times more dense through packaging improvements and a layer of a material that is just three atoms thick. And that could pave the way to putting a trillion transistors on a chip package by 2030.

Moore’s Law is supposed to be dead. Chips aren’t supposed to get much better, at least not through traditional manufacturing advances. That’s a dismal notion on the 75th anniversary of the invention of the transistor. Back in 1965, Gordon Moore, who would go on to cofound Intel and serve as its chairman emeritus, predicted that the number of components, or transistors, on a chip would double every couple of years.

That law held up for decades. Chips got faster and more efficient. Chip makers shrank the dimensions of chips, and goodness resulted. The electrons in a miniaturized chip had shorter distances to travel. So the chip got faster. And the shorter distances meant the chip used less material, making it cheaper. And so Moore’s Law’s steady march meant that chips could get faster, cheaper, and even more power efficient at the same time.

But Moore’s Law really depended on brilliant human engineers coming up with better chip designs and continuous manufacturing miniaturization. During recent years, it got harder to make those advances. The chip design ran into the laws of physics. With atomic layers a few atoms thick, it wasn’t possible to shrink anymore. And so Nvidia CEO Jensen Huang recently said, “Moore’s Law is dead.”

Above: Intel showed how it could build chips with complex interconnected packages.

That’s not good timing, since we’re just about to start building the metaverse. Moore’s Law is vital to addressing the world’s insatiable computing needs as surging data consumption and the drive toward increased artificial intelligence (AI) brings about the greatest acceleration in demand ever.

A week after Nvidia’s CEO said that, Intel CEO Pat Gelsinger said that Moore’s Law is alive and well. That’s no surprise since he has bet tens of billions of dollars on new chip manufacturing plants in the U.S. Still, his researchers are backing him up at the International Electron Devices Meeting. Intel made it clear that these advances may be five to ten years out.

In papers at the research event, Intel described breakthroughs for keeping Moore’s Law on track to a trillion transistors on a package in the next decade. At IEDM, Intel researchers are showcasing advances in 3D packaging technology with a new 10 times improvement in density, Paul Fischer, director and senior principal engineer in components research at Intel, said in a press briefing.

“Our mission is to keep our options for process technology rich and full,” he said.

These packages have been used in innovative ways lately; Intel rival Advanced Micro Devices announced that its latest graphics chip has a processor chip and six memory chips — all connected together in a single package. Intel said it collaborates with government entities, universities, industry researchers, and chip equipment companies. Intel shares the fruits of the research at places like the IEDM event.

Intel also unveiled novel materials for 2D transistor scaling beyond RibbonFET, including super-thin materials just three atoms thick. It also described new possibilities in energy efficiency and memory for higher-performing computing; and advancements for quantum computing.

“Seventy-five years since the invention of the transistor, innovation driving Moore’s Law continues to address the world’s exponentially increasing demand for computing,” said Gary Patton, Intel vice president of components research and design enablement, in a statement. “At IEDM 2022, Intel is showcasing both the forward-thinking and concrete research advancements needed to break through current and future barriers, deliver to this insatiable demand, and keep Moore’s Law alive and well for years to come.”

The transistor’s 75th birthday

Above: The layers between chip circuits can be as little as three atoms thick.

Commemorating the 75th anniversary of the transistor, Ann Kelleher, Intel executive vice president and general manager of technology development, will lead a plenary session at IEDM. Kelleher will outline the paths forward for continued industry innovation – rallying the ecosystem around a systems-based strategy to address the world’s increasing demand for computing and more effectively innovate to advance at a Moore’s Law pace.

The session, “Celebrating 75 Years of the Transistor! A Look at the Evolution of Moore’s Law Innovation,” takes place at 9:45 a.m. PST on December 5.

To make the required advances, Intel has a multi-pronged approach of “growing significance and certainly a growing influence within Intel” that looks across multiple disciplines: the company has to move forward in chip materials, chip-making equipment, design, and packaging, Fischer said.

“3D packaging technology is enabling the seamless integration of chiplets,” or multiple chips in a package, he said. “We’re blurring the line between where silicon ends and packaging begins.”

Continuous innovation is the cornerstone of Moore’s Law. Many of the key innovation milestones behind two decades of power, performance, and cost improvements in personal computers, graphics processors, and data centers – including strained silicon, high-k metal gate, and FinFET – started with Intel’s Components Research Group.

Further advances, including RibbonFET gate-all-around (GAA) transistors, PowerVia backside power delivery technology, and packaging breakthroughs like EMIB and Foveros Direct, are on the roadmap today.

At IEDM 2022, Intel’s Components Research Group said it is developing new 3D hybrid bonding packaging technology to enable seamless integration of chiplets; super-thin 2D materials to fit more transistors onto a single chip; and new possibilities in energy efficiency and memory for higher-performing computing.

How Intel will do it

Above: Intel foresees voracious demand for computing power.

Researchers have identified new materials and processes that blur the line between packaging and silicon. Intel said it foresees moving from tens of billions of transistors on a chip today to a trillion transistors on a package, which can hold many chips.
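A back-of-the-envelope projection shows why 2030 is a plausible target. Assuming roughly 100 billion transistors per package in 2022 and Moore’s classic doubling every two years (our illustrative numbers, not Intel’s):

```typescript
// moores-law.ts: a back-of-the-envelope projection. The starting count
// and doubling period are illustrative assumptions, not Intel figures.
const START_YEAR = 2022;
const START_TRANSISTORS = 100e9; // "tens of billions" per package today
const DOUBLING_PERIOD_YEARS = 2; // Moore's classic cadence

for (let year = START_YEAR; year <= 2030; year += DOUBLING_PERIOD_YEARS) {
  const doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS;
  const count = START_TRANSISTORS * 2 ** doublings;
  console.log(`${year}: ~${(count / 1e12).toFixed(1)} trillion transistors`);
}
// Prints ~0.1 (2022), ~0.2, ~0.4, ~0.8, and ~1.6 trillion (2030).
```

Four doublings between 2022 and 2030 multiply the count by 16, carrying a 100-billion-transistor package to roughly 1.6 trillion, comfortably past the trillion mark.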

One way to make the advances is through packaging that can achieve an additional 10 times interconnect density, leading to quasi-monolithic chips. Intel’s materials innovations have also identified practical design choices that can meet the requirements of transistor scaling using a novel material just three atoms thick, enabling the company to continue scaling beyond RibbonFET.

Intel’s latest hybrid bonding research presented at IEDM 2022 shows an additional 10 times improvement in density for power and performance over Intel’s IEDM 2021 research presentation.

Continued hybrid bonding scaling to a three-nanometer pitch achieves similar interconnect densities and bandwidths as those found on monolithic system-on-chip connections. A nanometer is a billionth of a meter.

Intel said it is looking to super-thin ‘2D’ materials to fit more transistors onto a single chip. Intel demonstrated a gate-all-around stacked nanosheet structure using a thin 2D channel just three atoms thick, while achieving near-ideal switching of transistors on a double-gate structure at room temperature with low leakage current.

These are two key breakthroughs needed for stacking GAA transistors and moving beyond the fundamental limits of silicon.

Researchers also revealed the first comprehensive analysis of electrical contact topologies to 2D materials that could further pave the way for high-performing and scalable transistor channels.

To use chip area more effectively, Intel redefines scaling by developing memory that can be placed vertically above transistors. In an industry first, Intel demonstrates stacked ferroelectric capacitors that match the performance of conventional ferroelectric trench capacitors and can be used to build FeRAM on a logic die.

An industry-first device-level model captures mixed phases and defects for improved ferroelectric hafnia devices, marking significant progress for Intel in supporting industry tools to develop novel memories and ferroelectric transistors.

Above: Intel sees a path to trillion-transistor chips with several approaches.

Bringing the world one step closer to transitioning beyond 5G and solving the challenges of power efficiency, Intel is building a viable path to 300-millimeter GaN-on-silicon wafers. Intel breakthroughs in this area demonstrate a 20 times gain over industry-standard GaN and set an industry-record figure of merit for high-performance power delivery.

Intel is making breakthroughs on super-energy-efficient technologies, specifically transistors that don’t forget, retaining data even when the power is off. Already, Intel researchers have broken two of three barriers keeping the technology from being fully viable and operational at room temperature.

Intel continues to introduce new concepts in physics with breakthroughs in delivering better qubits for quantum computing. Intel researchers work to find better ways to store quantum information by gathering a better understanding of various interface defects that could act as environmental disturbances affecting quantum data.
