We don’t know a lot about this video, except that it appears to be the work of a company in Tel Aviv, Israel, called Canny AI, which specializes in advanced video post-processing technology. They can do video dialog replacement, lip-sync dubbed dialog in any language, reprocess existing footage for new dialog, and probably a lot more.
The song “Imagine” by John Lennon was released as a single in 1971, and was the title track of his album of the same name. Its message is as essential now as it was then.
Watch as the leaders of the free world – and the leaders of the not-so-free world – join in song to deliver a message of peace, love, and hope.
If only this were real.
It isn’t real. While it looks amazing, it is shockingly and sadly impossible. We, the people of Earth, look to our leaders for hope, guidance, and truth, and right now we’re not getting much of any of them from any quarter, including from those we expect them from most.
Canny AI’s technology shows just how far deepfake technology has come, and how difficult it would be to detect a misinformation campaign launched by a hostile power, foreign or domestic. It’s not going away, and while it could be a tremendously useful tool for the entertainment industry, it’s also potentially one of the biggest technological threats there is to truth and the common welfare.
VDR, or Video Dialog Replacement, need not be a weapon of disinformation. Canny AI produced the video hoping to illustrate exactly that point.
“There’s a lot of hype on that, around the fake news with this technology and we wanted to do something with a strong unifying message, to show some positive uses for this technology.” – Canny AI Co-founder Omer Ben-Ami
The process relies on something called deep learning, and it’s starting to become a major factor in visual effects. The techniques are still very new, but they’re going to change the visual effects industry dramatically, and in very short order. They produce new footage by feeding the algorithms dozens of hours of sample footage and letting them figure things out for themselves, according to complex and elegant mathematical formulae describing the weighting, motion, and other attributes of the features in the videos they analyze.
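To give a rough flavor of the idea, here’s a toy sketch in NumPy, not Canny AI’s actual (proprietary) method: a tiny linear autoencoder that learns, by gradient descent, to compress “frames” into a compact representation and reconstruct them. Real deepfake systems use far deeper networks trained on hours of real footage; the random data and dimensions here are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for video frames: 200 "frames" of 64 pixel values each.
n_frames, n_pixels, n_latent = 200, 64, 8
frames = rng.normal(size=(n_frames, n_pixels))

# Encoder and decoder weight matrices, learned from the data.
W_enc = rng.normal(scale=0.1, size=(n_pixels, n_latent))
W_dec = rng.normal(scale=0.1, size=(n_latent, n_pixels))

def loss(frames, W_enc, W_dec):
    """Mean-squared reconstruction error."""
    recon = frames @ W_enc @ W_dec
    return np.mean((recon - frames) ** 2)

lr = 0.01
initial = loss(frames, W_enc, W_dec)
for _ in range(500):
    latent = frames @ W_enc        # compress each frame
    recon = latent @ W_dec         # reconstruct it from the compact code
    err = recon - frames           # reconstruction error
    # Gradient steps on both weight matrices (scaled MSE gradients).
    W_dec -= lr * (latent.T @ err) / n_frames
    W_enc -= lr * (frames.T @ (err @ W_dec.T)) / n_frames

final = loss(frames, W_enc, W_dec)
print(final < initial)  # the model gets better at reconstructing its input
```

The same principle, scaled up enormously, is what lets these systems learn a person’s facial features well enough to re-render them saying something new.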
What can we do to protect ourselves from the deleterious effects of the misuse of this technology? So far, there’s no easy answer.
For a brief moment, though, we can dream of a better world.