Nvidia and Meta CEOs Talk Generative AI at Siggraph 2024 – Video

Mark, welcome to your first SIGGRAPH. How do you see the advancement of generative AI at Meta today, and how do you apply it to enhance your business or introduce new capabilities that you offer?

I think we're quickly moving into a zone where it's not just that the majority of the content you see on Instagram today is recommended to you from the kinds of things out there in the world that fit your interests, whether or not you follow the people. I think in the future a lot of that content is going to be made with these tools as well. Some of it will be creators using the tools to make new content, and some of it will be content that's made on the fly for you, or some kind of amalgamation and synthesis of different things that are out there. So I kind of dream that one day you can almost imagine all of Facebook or Instagram being a single AI model that has unified all of these different types of content and systems, which actually have different goals over different time frames. Part of it is just showing you the interesting content you want to see today, but part of it is helping you build your network over the long term: people you might know or accounts you might want to follow. And these multimodal models are generally much better at recognizing patterns, weak signals, and things like that.

It's so interesting that AI is so deeply ingrained in your business. You've built GPU infrastructure and have been running these big recommendation systems for a long time, although you were a little slow to actually get to GPUs. I was trying to be nice. I know. Well, tell everyone about Creator AI and AI Studio that's going to allow you to do that.

Yeah. This is something we've talked about a little bit, but we're rolling it out much more broadly today. A big part of our vision is that there isn't going to be just one AI model. Some other companies in the industry are building one central agent, and yes, we'll have the Meta AI assistant that you can use, but a big part of our vision is that we want to enable everyone who uses our products to basically create agents for themselves. Whether that's the many millions of creators on the platform or the hundreds of millions of small businesses, ultimately we want to be able to pull in all of your content, very quickly set up an agent for your business, and have it communicate with your customers and do sales and customer support and things like that. The one we're just starting to roll out more broadly now is what we call AI Studio. It's basically a set of tools that will eventually let any creator build an AI version of themselves, a sort of agent or assistant that their community can interact with. There's a fundamental problem here: there are not enough hours in the day. If you're a creator, you want to engage with your community more, but you're limited in time, and similarly your community wants to engage with you, but it's hard; there's just limited time to do that.

So the next best thing is letting people create these artifacts. It's an agent, but you train it on your own material to represent you in the way that you want. I think it's a very creative endeavor, almost like a piece of art or content that you're putting out there. And it's going to be very clear that people aren't engaging with the creator themselves, but I think it's going to be another interesting way, just like how creators put content out on these social systems, to have agents that do that. One of the interesting use cases we're seeing is people using these agents for support. This was something that was a little surprising to me: one of the top use cases for Meta AI is people using it to role-play difficult social situations they're in. Whether it's a professional situation, like how to ask your manager for a promotion or a raise, or a fight with your boyfriend, or a difficult conversation with your girlfriend, it gives you a completely judgment-free zone where you can role-play the conversation, see how it might go, and get feedback on it. But a lot of people don't just want to interact with the same kind of agent, whether it's Meta AI or ChatGPT or whatever everyone else is using; they want to create their own thing.

So Llama is really important. We've built this concept of an AI Foundry around it so that we can help everybody build. A lot of people have a desire to build an AI, and it's really important for them to own that AI, because once they put it into their data flywheel, that's how the institutional knowledge of their business gets encoded and embedded into an AI. They can't afford to have that AI flywheel, that data flywheel, that experience flywheel somewhere else. Open source allows them to do that, but they don't really know how to turn the whole thing into an AI. So we created this thing called an AI Foundry. We provide the tooling, we provide the expertise and the Llama technology, and we help them turn the whole thing into an AI service. Then, once we're done, they take it and they own it. The output is what we call a NIM, an NVIDIA inference microservice. They just download it, take it, and run it anywhere they like, including on-prem. And we have a whole ecosystem of partners, from OEMs that can run the NIMs to GSIs like Accenture that we've trained and are working with to create Llama-based NIMs and pipelines. Now we're helping companies all over the world do this. It's a really exciting thing, and it was really all triggered by the Llama open sourcing.
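Huang's AI Foundry pitch boils down to a concrete workflow: package a Llama-based model as a NIM container, download it, run it wherever you like, and talk to it over a standard API. As a rough illustration, the sketch below assumes a Llama NIM container is already running locally and exposing its OpenAI-compatible endpoint on the default port 8000; the host, model name, and prompt are placeholders rather than anything shown at the talk.

    # Minimal sketch: querying a locally running NIM through its OpenAI-compatible API.
    # Assumes the container is already up on localhost:8000 and the "openai" package is installed.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

    response = client.chat.completions.create(
        model="meta/llama-3.1-8b-instruct",  # placeholder: use whatever model the downloaded NIM serves
        messages=[{"role": "user", "content": "Draft a short reply to a customer asking about returns."}],
        max_tokens=128,
    )
    print(response.choices[0].message.content)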
The Ray-Ban Metas and your vision for bringing AI into the virtual world are really interesting. Tell us about that.

Yeah. Well, there's a lot to unpack there. The Segment Anything model you're talking about, we're actually showcasing the next version of it, Segment Anything 2, here at SIGGRAPH. It works now, it's faster, and it works with... oh, here we go. It also works in video now. I think this is actually cattle from my ranch in Kauai. By the way, these are, as they say, delicious.

There you have it. Next time we'll do that. So Mark came over to my house and we made Philly cheesesteak together. Next time you bring it. I was more of a sous chef.

There are a lot of fun effects that can be done with this, and it's also going to open up a lot of more serious applications in industry. Scientists are using this stuff to study coral reefs and natural habitats and the evolution of landscapes and things like that. Being able to do this in video, zero-shot, and being able to interact with it and tell it what you want to track, it's pretty cool research.
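For anyone who wants to try that kind of zero-shot video tracking, Meta released SAM 2 as open source alongside the talk. The sketch below follows the usage pattern in the sam2 package's README; the checkpoint path, config name, frame directory, and click coordinates are placeholder assumptions, and a CUDA GPU is assumed.

    # Minimal sketch of prompt-based, zero-shot video tracking with Meta's open-source
    # sam2 package. Paths, the frame index, and the click location are illustrative.
    import numpy as np
    import torch
    from sam2.build_sam import build_sam2_video_predictor

    checkpoint = "checkpoints/sam2_hiera_large.pt"  # downloaded separately
    model_cfg = "sam2_hiera_l.yaml"
    predictor = build_sam2_video_predictor(model_cfg, checkpoint)

    with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
        # init_state takes a directory of extracted JPEG video frames
        state = predictor.init_state(video_path="video_frames/")

        # A single positive click on the first frame tells the model what to track;
        # no fine-tuning on the object class is needed.
        predictor.add_new_points(
            inference_state=state,
            frame_idx=0,
            obj_id=1,
            points=np.array([[420, 260]], dtype=np.float32),  # (x, y) pixel location
            labels=np.array([1], dtype=np.int32),              # 1 = foreground click
        )

        # Propagate the prompt through the rest of the video to get per-frame masks.
        for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
            pass  # e.g. threshold mask_logits and overlay on the corresponding frame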
I think you're going to end up with a whole range of potential eyewear products at different price points, with different levels of technology in them. Based on what we're seeing now with the Ray-Ban Metas, I would guess that display-less AI glasses priced at around $300 are going to be a really big product that tens of millions or hundreds of millions of people will eventually have. You're going to have super-interactive AI that you talk to. You're going to have visual language understanding, like you just demonstrated. You'll have real-time translation, so you could talk to me in one language and I hear it in another. And then of course a display is going to be great as well, but it's going to make the glasses a little heavier and more expensive. So I think there are going to be a lot of people who want the fully holographic display, but there are also going to be a lot of people who want something that's ultimately going to be really thin lenses.

And you guys know, when Zuck calls about his H100 data center, I think you're heading for 600,000. And we're good customers. That's how you get the Jensen Q&A at SIGGRAPH. Yes, it is. Ladies and gentlemen, Mark Zuckerberg.
