This Is Why We Can’t Have Nice Things…
By Alex Pecevich, LCLG Business Development Manager
In an era of “shock and awe” social media timelines and increasingly normalized states of emergency, what would once have been considered a historical extreme now fails to hold our nation’s collective attention for even a weekend news cycle. News today has become a reflection of what we as a society deem “newsworthy.” The demand for articles covering the societal impact of lenient marijuana legislation may exist in the desolate message boards of the interweb, but it’s not moving the needle with the 18-35 crowd. Instead of being told everything that is going wrong in the world, we want to know how our lives are going to be made easier. And please know that the irony of making “easy” lives easier is not lost on me, as I’m sure there’s an old timer out there shaking their fist at the heavens saying, “back in my day…!” Still, technology prevails, and modern examples are seen every day, from Amazon drones that drop packages right on our back porches to driverless meal delivery that uses app tracking to tell me what I want before I even want it. The use of Artificial Intelligence (AI) was born out of a societal need to escape, and its subtle prevalence has blinded people to the question of who is going to be left in its wake.
And this desire for escapism that younger generations possess should not conjure images of young adults scurrying back to their home bases, heads down, hiding from the negativity. Instead, they simply avoid the gloom and doom altogether. A helpful tool in their arsenal? Comedy and entertainment. These resources are more accessible than ever; a real-life uncomfortable moment can even trigger a subconscious reach for your phone just to disconnect. But with the average adult now spending 6 hours and 58 minutes online,1 what remains mysteriously unspoken is the concern for how real everything we’re taking in actually is. Or rather, how fake. And if the barrage of stories themselves weren’t enough to dull the senses, the sheer oversaturation of media and ways to ingest information has only served to delegitimize any factual basis the “truth tellers” might once have had.
Even in writing this post, I couldn’t help but notice the stark contrast between our present day and my own childhood, when the prevailing notion was, “I’ll believe it when I see it.” Essentially: “yes, I’m skeptical of the opposing viewpoint, but I am willing to concede if I see enough supporting evidence.” Twenty years ago, people had only a handful of options for how they received the news, and they could find solace in the fact that those articles and stories were properly vetted. Those days are now a bygone era; our attention spans have shortened, and we need information NOW.
Today, the easiest way for people to find stories supporting their side of an argument? Artificial Intelligence. The invention of this double-edged sword had critics expressing apprehension almost immediately, as they realized how easily news could be artificially manufactured and repurposed. The once-healthy skepticism of media consumers has devolved into indignant shouting of the term “fake news.” People on either side of an argument can now claim audio or video subterfuge (whether it exists or not). We have come a long way from the inception of AI in social media, when the early examples were obviously fake and looked like harmless Snapchat filters and shoddily constructed Steve Buscemi memes. But now, in the era of AI, machine learning and deepfakes, seeing is no longer believing.
For those unfamiliar with this recent terminology, the term “deepfake” was first brought into public discourse around 2017 by an anonymous online persona who coined it to describe his “work” using AI to manipulate pornographic videos. The underlying technology was introduced three years earlier, when American computer scientist Ian Goodfellow and his colleagues at the Université de Montréal had a breakthrough powered by an innovative deep learning method known as generative adversarial networks (GANs). To simplify without over-simplifying: a GAN consists of two artificial intelligence agents, one of which forges an image while the other attempts to detect the forgery. Each time the detector spots a fake, the forger adapts and improves.2
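For readers who like to see the mechanics, the forger-versus-detector loop described above can be sketched in a few lines of Python. This is a toy illustration of the adversarial idea only, not a real neural-network GAN; the distribution, threshold and step size are invented for demonstration.

```python
import random
import statistics

# Toy sketch of the adversarial idea behind GANs: a "forger" learns to
# imitate real data while a "detector" learns what real data looks like.
# (A real GAN trains two neural networks; the numbers here are made up.)

random.seed(0)

REAL_MEAN, REAL_STD = 4.0, 1.0  # the "real" data distribution


def real_sample() -> float:
    """Draw one genuine data point."""
    return random.gauss(REAL_MEAN, REAL_STD)


forger_mean = 0.0  # the forger starts out producing obvious fakes

for step in range(500):
    # Detector: re-estimates what "real" looks like from fresh samples.
    detector_estimate = statistics.mean(real_sample() for _ in range(50))

    # Forger produces a fake; the detector flags it if it sits too far
    # from the detector's current estimate of reality.
    fake = random.gauss(forger_mean, REAL_STD)
    caught = abs(fake - detector_estimate) > 2 * REAL_STD

    # Every time it is caught, the forger adapts, nudging its output
    # toward what the detector considers real.
    if caught:
        forger_mean += 0.1 * (detector_estimate - forger_mean)

print(round(forger_mean, 1))  # drifts toward the real mean of 4.0
```

After enough rounds of being caught and adapting, the forger’s output is statistically close to the real data, which is exactly why mature deepfakes are so hard to spot by eye.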
Before the news even made it around the world, computer scientists, programmers and even avid online enthusiasts began manipulating and altering videos more seamlessly than ever before. Early uses of this technology looked like the brainchildren of people’s most outlandish imaginations. One popular example overlaid Mike Tyson and Snoop Dogg on top of Oprah and Gayle as they reminisced over high school love letters. Skeptics were outspoken from the beginning about the caution with which this technology should be used, as it was evidently ripe for abuse. But the cat was out of the bag, and no number of warnings could undo the potential for harm that had already been irrevocably unleashed.
The unavoidable truth is that the future those skeptics once worried about has arrived. One of the first and most outspoken factions negatively affected by AI has been the creators. Everyone from actors to musicians to artists is now seeing their work recreated and distributed without their consent. Whether it is using someone’s likeness to recreate a movie scene or dubbing someone’s voice to refurbish a pop song, this conflict against faceless AI has proven difficult and confusing, as legal teams must now argue over where creation ends and copyright begins. Much to the artists’ dismay, early judicial opinions have held that this AI-generated content is in fact not copyright infringement, as the AI is scraping the internet and using billions of data points to “create” what you see and hear.3 Needless to say, this dispute between artists and AI is far from over, as some of the more prominent figures with deeper pockets have already begun to appeal. As the courts carry on and legal teams convene, another group, not nearly as well funded, has chosen to speak up against AI with a more direct and involved approach.
This group, which has seen the writing on the wall and seeks to put its collective foot down, is the Writers Guild of America (WGA), a union of over 4,700 writers and media professionals.4 Given how easily AI can be used to create almost anything out of thin air, the WGA is fighting tirelessly to make sure that its next contract with management includes specific language excluding the use of AI in script writing. The effects and publicity of the writers’ strike may just be coming to light now, but the ball officially began rolling on April 3rd of this year, when the WGA asked its writers to vote on authorizing a strike and received back an overwhelming 97.85 percent “yes” vote.5
For those unfamiliar, a strike is a collective work stoppage enacted by the workers of an organization, typically in response to unaddressed employee grievances. Often seen as a last resort, employees halt production (in this case, scripts) in an attempt to impact a company’s bottom line and draw public attention to the workers’ complaints. While advocates on the management side have done their best to muddy the waters as to why this is happening, the WGA has been clear that studios’ use of artificial intelligence is the union’s primary source of contention.
As the elected and hired leaders on each side continue to negotiate, the rest of the members are left to sweat it out, each side hoping the ongoing lack of income will force the other to give in to (or abandon) its demands. Much like a civil trial, the winning side is often the one that can afford to hold out the longest. While there is strength in numbers and public opinion, the workers are at a significant disadvantage. With the WGA strike dragging on for months, members have begun to worry that their collective demands will be crushed as striking colleagues exhaust whatever personal savings they had and are left unable to meet their basic needs. And even though the physical picket lines are in Hollywood, the fears harbored by the workers are not overdramatized. One studio executive was even reported as saying that “the endgame is to allow things to drag on until union members start losing their apartments and losing their houses.”6
While the WGA has the loudest voice, other forms of power lie in the hands of onlooking public figures. Despite management appearing too big to fail, Democratic Senator John Fetterman of Pennsylvania has refused to sit on his hands. Last month, with the support of several notable colleagues, he introduced the Food Secure Strikers Act of 2023. This legislation would amend the Food and Nutrition Act of 2008 to repeal its “restriction on striking workers receiving SNAP (Supplemental Nutrition Assistance Program) benefits,” protect food stamp eligibility for public-sector workers fired for striking, and clarify that any income-eligible household can receive SNAP benefits even if a member of that household is on strike.7
The Fetterman bill “will eliminate the need for workers to choose between fighting for fair working conditions and putting food on the table for their families.”8 If passed, this legislation would set a precedent for future workers considering a strike, as they would be able to receive government assistance while standing up for their rights as workers. For too long, the most effective negotiating tool management has had is that impoverished laborers simply can’t afford not to work, as each forfeited paycheck brings them closer and closer to poverty and starvation.
But let’s not rejoice just yet. It’s almost certain that this bill will receive pushback from across the party aisle, as unanimous support for any bill is vanishingly rare. If there’s one thing we can all agree on, it’s that we remain divided. Even the idea of humanitarianism has its dissidents. But all hope is not lost. Artificial Intelligence is a long way from the “takeover” that the doomsday preppers have been warning us about. While this technology is still in its infancy, we must take it upon ourselves to make sure that the wants of the many do not outweigh the needs of the few. The healthy distraction that AI provides may have briefly supplanted the fear of what the societal and economic fallout might look like, but for anything to change we must at least start the conversation, whether between a Democrat and a Republican or between labor and management.
Nowadays it seems a rarity that people actually change their ideological positions on a topic, but with the invention of AI and deepfakes we won’t have to. Instead, we’ll dig in and posit that any opposing numbers and statistics are the byproduct of some kind of fake news. As a quote often attributed to Albert Einstein puts it, “it has become appallingly obvious that our technology has exceeded our humanity.” Any fight waged to prevent this progress is futile. But, much like the WGA, we still need to use our agency to police it. Otherwise, like our social media timelines, the line between real and fake will blur until reality becomes whatever we as consumers want it to be.
If you or anyone you know has questions about your labor and employment rights, please give us a call at (206) 926-6700 or visit our website at www.lemonidislaw.com for more information.