Get ready for the next generation of AI

Researchers from Google also submitted a paper to the conference about their new model, DreamFusion, which generates 3D models from text prompts. The models can be viewed from any angle, the lighting can be changed, and they can be plonked into any 3D environment. 

Don’t expect to get to play with these models anytime soon. Meta isn’t releasing Make-A-Video to the public yet. That’s a good thing. Meta’s model is trained on the same open-source image data set that was behind Stable Diffusion. The company says it filtered out toxic language and NSFW images, but that’s no guarantee it caught all the nuances of human unpleasantness in a data set consisting of millions upon millions of samples. And the company doesn’t exactly have a stellar track record when it comes to curbing the harm caused by the systems it builds, to put it lightly. 

The creators of Phenaki write in their paper that while the videos their model produces are not yet indistinguishable in quality from real ones, that “is within the realm of possibility, even today.” Before releasing the model, its creators say, they want to get a better understanding of the data, prompts, and filtering of outputs, and to measure biases, in order to mitigate harms. 

It’s only going to become harder and harder to know what’s real online, and video AI opens up a slew of dangers that audio and images don’t pose, such as the prospect of turbocharged deepfakes. Platforms like TikTok and Instagram are already warping our sense of reality through augmented face filters. AI-generated video could be a powerful tool for misinformation, because people have a greater tendency to believe and share fake videos than fake audio or text versions of the same content, according to researchers at Penn State University. 

In conclusion, we haven’t even come close to figuring out what to do about the toxic elements of language models. We’ve only just started examining the harms of text-to-image AI systems. Video? Good luck with that. 

Deeper Learning

The EU wants to put companies on the hook for harmful AI

The EU is creating new rules to make it easier to sue AI companies for harm. A new bill published last week, which is likely to become law in a couple of years, is part of a European push to deter AI developers from releasing dangerous systems.

The bill, called the AI Liability Directive, will add teeth to the EU’s AI Act, which is set to become law around the same time. The AI Act would require extra checks for “high risk” uses of AI that have the most potential to harm people. This could include AI systems used for policing, recruitment, or health care. 
