This semester’s first class of the MIT AI Venture Studio course offered some very interesting insights into the current state of artificial intelligence, and students wondered what they should do as they move to the next stage of this rapidly evolving field. We spoke to several industry leaders to find out what your priorities should be.
Ramesh Raskar, who leads the class, provided some insight into what’s happening with AI and talked about the big changes in models that will make them more powerful than anything we’ve seen so far.
He described a very different set of use cases for large language models, as opposed to what you get from generative AI more broadly.
But perhaps even more relevant, he talked about the shift from supervised to unsupervised learning, and from “screen learning” that takes place on devices to 3D learning that takes place closer to the real world.
This means that when you build this kind of technology into robotics and autonomous agents that can move through the world, you get a very different type of AI, and for many people, a more frightening one. Things get even stranger when you move from supervised learning to reinforcement learning, where there is no longer a clear set of labels handed to the program in its training and test data.
Raskar also contrasted what he called “sprinkle AI,” frivolous tinkering around the edges, with more substantive three-dimensional AI, where the use cases become clearer.
From a business perspective, he pointed to three current directions in AI: niche applications, platforms, and specific use cases. He returned to the concept of “screen AI,” where the technology lives in a screen interface, and suggested that without strong in-house technology, some of these applications are little more than window dressing.
“They are easy to build,” he said of screen AI products.
As an example, he pointed to Uber, where the ride-hailing algorithm is the heart of the business, the secret sauce that no one can imitate.
In explaining this type of competitive strategy, Raskar pointed out that there is a lot of money in this space, estimated at $99 trillion over five years.
He also stressed that the work must be done responsibly, safely, and ethically.
So what are these new 3D AI projects?
He moved on to 3D use cases, describing headsets, cameras, and other equipment for first responders, with a focus on AR. You can imagine something a lot like the displays in the old Terminator movies, except, obviously, used for good purposes.
Returning to Uber and how the new tech economy works, Raskar spoke about the need to pursue three stages of AI development: data acquisition, data analysis, and engagement, the steps that take a project out into the world and put it to work.
On the concept of data capture, he contrasted the traditional taxi system with the new, disruptive Uber.
The difference, he argued, is that the taxi system collects no data, or at least nothing worth mentioning. Newer taxis have card systems, but traditionally there was no digital component at all; you paid for your ride in cash according to the meter. At Uber, by contrast, everyone’s trip data is pooled and scrutinized by machines. And the machines are getting smarter.
We also got some insights from Beth Porter, who talked about educational technology and AI for neurodivergence.
“If you know someone who has a child with autism, if you know someone who has a child with ADHD, you know you spend a lot of money, and millions of frustrating hours, trying to get students to engage meaningfully with the content experience,” she said.
Many of these tools, she said, are relatively ineffective because they aren’t in the right format or don’t target the needs of neurodiverse students well.
Students, she explained, don’t get the right kind of feedback and don’t feel they can connect in any meaningful way.
Porter encouraged students to think about the issue holistically and consider what types of learning can help people with disabilities. It doesn’t have to happen through traditional modes such as text or voice, she pointed out; some of it could come through images and video. She also suggested that AI for neurodivergence could have implications for augmented reality and other similar projects.
Hossein Rahnama spoke to us about what early-career professionals can do to further their goals and those of their communities.
He suggested working on the heart of the project, not just the interface.
He used the word “co-creation” to describe imagining someone else using your idea in order to come up with secondary applications.
He also talked about the value technology brings to everyday users, and contrasted that with the path forward for developing B2B software and AI products.
Whichever path the students choose, Rahnama encouraged them to embrace innovation. “Be passionate,” he said, speaking about the value of improving the patient experience in healthcare, among other use cases.
After Rahnama, Sandy Pentland, a longtime MIT researcher, appeared to talk about perspective-aware computing and other new advances.
“Don’t think small,” he said, encouraging students to “build something that touches a billion people.”
Regarding opportunities, he talked about reducing silos in the healthcare sector.
“You have to be able to tie (things) together,” he said, adding that doing so requires AI.
He cited the pandemic as a prime example, noting that the response could have been stronger with better data handling.
“We didn’t share that data directly. We could have done a dramatically better job,” he said.
He also talked about microbiome and RNA analysis.
Finally, we heard some interesting input from Dave Blundin, who talked about the big changes that will occur in just a few years.
Blundin began by talking about joining Lincoln Laboratory, which relates to a conversation I had with Ivan Sutherland in another post. He also talked about how he ended up at MIT as an avid fan of Marvin Minsky.
Blundin recalled the inequality he saw growing up in Iran and pointed to some of the paths toward agile technology, citing the example of Amazon, which started as a small startup and went on to displace Walmart.
He also talked about how to measure the light-speed advances in AI.
“What part of your life did you spend talking to an AI in the last year?” he asked, suggesting that students count things like their interactions with Siri, and predicting that the metric would increase every year.
“We receive thousands of customer service calls every day,” he said of one of his companies, which he has taken public. “We record them all. Of course, those are the things we’re testing… They’re going to move (to AI) very soon.”
Blundin also had some interesting thoughts when it comes to writing code.
“At OpenAI, 80% [of the code is now] written by machines,” he said, citing a recent conversation with Sam Altman, and suggested there was consensus that the figure would rise to 95% within just a year or two.
All of this was quite an eye-opener. Stay tuned for more insights into 2024.