2024-1 Trendy Media: Platforms, Apps and OTT Streamers Final Paper (Professor Keith Wagner)

AI Films in June 2024

                                                                        MFA Candidate Cinema Major mfer#7653 

                                         (Yonsei University Grad School of Communication and Arts)

1. Introduction

Since OpenAI released the demo version of its AI video model 'Sora' ('video generated by AI' is the accurate description, but for simplicity I will use the term 'AI video'), interest in 'AI film' (accurately, 'film generated by AI', but for simplicity referred to as 'AI film') has been growing in the United States. Because the AI industry is heavily concentrated in the US (Llama from Meta, Claude from Anthropic, ChatGPT from OpenAI, as well as the AI services from Google (Alphabet), Apple, and Amazon are all products of American corporations), discourse about AI video is also US-centered, and most AI video models are US-based. As Sora is not yet public (as of June 2024), most AI videos are made with Stable Diffusion (Stability AI), RunwayML, Pika Labs, or Haiper AI. AI video models from China and Germany are also being released, but those models target researchers or filmmakers with a deep understanding of AI and computer engineering, which most traditional filmmakers lack. This paper will therefore focus on AI video models from the US that are public and widely used.(*1)

This paper focuses on 'AI in film in June 2024' and, to be specific, intends to discuss how the film industry and film studies could view AI. Aware that there are many problems with AI in film, this paper will discuss those issues and how AI can provide innovation to a film industry that is in an existential crisis, overshadowed by new media such as YouTube, Instagram, and TikTok. The COVID-19 pandemic was a disaster for the film industry. Not only has there been a dramatic decline in theater audiences, but the form of viewing has also changed since the pandemic. Instead of traditional long-form films viewed in theaters, audiences have shifted toward short-form videos on platforms such as TikTok, YouTube, and Instagram. These platforms offer content that can be viewed on mobile devices like the iPhone or a laptop. Once a cultural shift occurs, it seldom reverts to how things were before. This presents a significant challenge for the film industry. And at the end of the pandemic, 'AI' surfaced.

2. AI Video and AI Film Today

It may seem early to talk about AI film today. However, when we look back at AI video in 2023, especially the video of 'Will Smith eating spaghetti', in which hallucination was badly exposed (to be honest, it was awful), AI videos in 2024 have improved dramatically. After OpenAI released its demo videos of Sora, AI-generated art, which had focused mainly on still image generation, broadened into AI video.(*2) It is true that AI video generation services in 2024 still face serious limits in making videos over a minute long. Most AI video makers generate clips of 2-4 seconds and edit them together in Premiere Pro (Adobe) or Final Cut Pro (Apple). Using Stable Diffusion, filmmakers can generate videos over 30 seconds, but continuity, which is extremely important in filmmaking, remains a major problem. We can find some AI films on YouTube, but to be honest, it is hard to call those videos 'films'; they are closer to trailers than films.

There are films that have used AI, such as The Irishman (2019, Martin Scorsese), which used AI for its de-aging effect (Kim, 2022: 33-46). Peter Jackson's 'The Lord of the Rings' trilogy used 'motion capture' to make 'Gollum' realistic, and Kim describes the motion capture used in The Lord of the Rings as a sort of 'AI'. James Cameron's Avatar used 'E-Motion Capture', drawing on data from the actors' facial movements to overcome the 'uncanny valley', a huge technological advancement compared to the motion capture The Lord of the Rings used (Kim, 2022: 34-36). Of course, we have to remember that The Lord of the Rings is a film from 2001. Following this trajectory, The Irishman could create realistic effects without the 'green screen' or 'markers' that disturb actors' realistic performances. However, the technology those films used is better described as 'VFX', and it is hard to consider it 'AI-generated', which means those films cannot be called 'AI-generated' films or AI films. Some short films using AI video can be found, but there is still no AI film that the public would recognize as such. Most AI videos or films are focused on 'visualizing', not 'storytelling'. Because of the continuity issue, filmmakers today have trouble making an AI film that we could truly call a 'film'. It is still too early to speak of 'AI cinema' in June 2024.

Also, because of the continuity issue, most AI videos are animated rather than realistic and cinematic, and most open-source AI video generation models focus on 'animation' rather than realistic, cinematic video. When we watch realistic, cinematic videos, hallucination and continuity matter a great deal. However, people tend to be, or frankly can afford to be, more generous about hallucination and continuity issues in animation than in realistic films, because people usually receive animation as 'fake' or 'not real'. Realistic, cinematic films, on the other hand, are expected to portray the 'real' world, where hallucination and discontinuity are not tolerated and can break immersion. Every day and every week, new models surface from nowhere, and these issues keep improving just as quickly. A generative AI video model just appeared from China in June 2024 (for now, as of June 2024, only mainland Chinese users can access it), and Luma AI has also released its 'img2vid' (turning an input image into AI video) and 'vid2vid' (video to AI video) models. Things are moving fast, and the speed of development is turning the AI video scene into the same chaos seen in the LLM (Large Language Model) scene, where OpenAI, Anthropic, and Meta(*3) are competing.