
A.I., Fauxtography, Metaverse, Mayhem: Understanding The World Changing In Our Very Hands.

TL;DR: Amid the golden illusion of perfect photos lies the unsettling reality of AI's deep intrusion into our lives, cleverly masking biases and reshaping professions. As Nicole Nguyen's WSJ critique and real-world AI applications reveal, we're at a juncture where AI's generative capabilities enchant with creativity while equally alarming with deepfake deceptions, even influencing legal outcomes. This duality prompts a societal reevaluation, demanding heightened skepticism and ethical mindfulness as AI's invisible hand quietly molds our perception of truth, professionalism, and reality itself.

The Photo That Never Happened.

It’s that magical golden hour on the beach, just before sunset, with everyone smiling and centered and not another soul around. It’s a perfect photo to put over the fireplace. Only that moment never happened. It was actually midday, there were random strangers in the shot, and there’s no one image where all the family members are smiling and looking at the camera. “Photo editing” has become “photo manipulation.”

Nicole Nguyen opened her October 4 article for the Wall Street Journal with this vivid depiction. Just a few days earlier, Lex Fridman—the renowned interviewer, podcast host, and machine learning engineer—interviewed Mark Zuckerberg of Meta (formerly Facebook), presenting both men as hyper-realistic talking heads in the Metaverse. And just last week—at the start of a bloody conflict between Israel and Hamas—reports circulated that certain images of war horrors may have been generated by AI.

These three examples all point to the same profound idea: technology is rapidly merging the real world with the artificial, and it's not happening in some distant future; it's happening now. As we navigate the final stretch of a year profoundly shaped by AI advances—so much so that I wouldn't be shocked if AI earned the title of Time's "Person of the Year" for 2023—pioneers like Google, OpenAI, Meta, and others compel us to ask: "Where is all this progress leading?"

Up until now, the AI and augmented reality debate (spanning both digital realities like Zuckerberg’s Metaverse and our tangible, physical world) has largely oscillated between the "progress is progress" enthusiasts and the "AI is going to devour us" skeptics. Discussions revolve around the futuristic vision—a world dominated by General AI, 3-day work weeks, and pervasive autonomy. Yet, scant attention is paid to the present moment: the palpable ways in which AI is reshaping our world right now, subtly altering the ground beneath our feet (or the devices in our hands).

In this article, we’ll delve into Generative & Analytical AI and explore their implications for you, me, and everyone in between.

Analytical AI (AAI).

Analytical AI (AAI) encompasses a range of subfields within artificial intelligence (AI) that aim to enable machines to understand, interpret, and classify human language, images, or other types of input data. The most prevalent forms of AAI include Natural Language Processing (NLP) and Image Recognition.

For AAI to operate, input data is segmented into smaller units, which are then parsed to identify the entities present and the relationships between them. This process allows the system to recognize what's in an image or discern what's being conveyed in a sentence. To enhance AAI capabilities, machine/deep learning algorithms are layered over this basic structure. These algorithms amplify the volume of data parsed during the decision-making phase and refine the resulting output. This architecture explains why platforms like OpenAI's ChatGPT can sift through user input with such pinpoint accuracy, discerning specific requests and even inferring meaning from very poorly structured content (something I am personally guilty of).
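To make that pipeline concrete, here is a minimal, hand-rolled sketch in Python. Everything in it (the toy entity table, the sentiment word lists, the `analyze` function) is invented for illustration; real systems replace each lookup table with a learned model trained on vast corpora.

```python
# A minimal, hand-rolled sketch of the analytical pipeline described above:
# segment the input into tokens, tag the entities we recognize, and make a
# simple classification decision. Production NLP systems replace each of
# these lookup tables with learned models.

ENTITIES = {"google": "COMPANY", "siri": "PRODUCT", "paris": "PLACE"}  # toy gazetteer
POSITIVE = {"love", "great", "perfect"}
NEGATIVE = {"hate", "broken", "awful"}

def analyze(text: str) -> dict:
    tokens = text.lower().replace(".", "").replace(",", "").split()   # 1. segmentation
    entities = {t: ENTITIES[t] for t in tokens if t in ENTITIES}      # 2. entity parsing
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    sentiment = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return {"tokens": tokens, "entities": entities, "sentiment": sentiment}  # 3. decision

print(analyze("I love asking Siri for directions to Paris."))
# {'tokens': [...], 'entities': {'siri': 'PRODUCT', 'paris': 'PLACE'}, 'sentiment': 'positive'}
```

Segmentation, entity parsing, and a final classification decision: that is the skeleton every NLP or image-recognition system fleshes out with statistics.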

On a typical day, many of us actively engage with NLP through tools like Google Translate or by talking to virtual assistants such as Siri or Google Assistant ("OK Google"). Behind the scenes, NLP also plays a pivotal role in functions like sentiment analysis of text and speech, processing complex datasets, interpreting technical diagrams, deciphering code, and even categorizing resumes.

Yet, despite these advancements, the majority remain in the dark about where Analytical AI is deployed. A recent Pew study revealed that "only three-in-ten U.S. adults could correctly identify all six AI applications mentioned in the survey." Those applications were fitness trackers, chatbots, product recommendations, security cameras (like Ring), music playlist generation, and spam email filtering. Because AAI has been woven so subtly into our daily lives, built upon decades of data collection by giants like Google and Meta, it seems we've either been oblivious to its expanding influence or viewed its emergence as so benign that it scarcely caught our notice.

While AAI generally operates behind the scenes and often serves as a foundation for more "show-stopping" AI use cases, it is not without its issues, and those problems can ripple into substantial downstream consequences, exacerbating pre-existing societal problems or reinforcing negative trends. A poignant illustration is how AAI can inadvertently amplify biases, baking them into subsequent decision-making processes. Amazon grappled with this reality in 2015 when attempting to employ NLP for resume categorization:

Amazon’s automated resume screening for selecting the top job candidates turned out to be discriminating against women in 2015. Amazon used resume samples of job candidates from a 10-year period to train its recruitment models. This supervised downstream NLP application learned how to score candidates by computing the patterns in previous resume samples from Amazon and respective information regarding the success level of the job candidate. As a result, the trained model learned the historical trends associated with employment at Amazon by discovering linguistic patterns on resumes. Women were underrepresented in the training set collected from employees. Consequently, the resume screening model associated men and the linguistic signals on their resumes with successful employment at Amazon, whereas resumes of candidates which contained words associated with women were frequently discarded by the algorithm. The biased patterns learned by the model led to discrimination against female job candidates. Amazon soon abandoned the automated recruitment tool after they discovered the bias. - Brookings
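To see the failure mode in miniature, consider the following Python sketch. The resume data, the word weights, and the `score` function are all hypothetical stand-ins for illustration (nothing here is Amazon's actual system), but the mechanism is the one Brookings describes: a model trained on skewed historical outcomes learns to punish tokens that merely correlate with gender.

```python
# Hypothetical illustration of the bias mechanism described above. The toy
# "history" stands in for ten years of resumes; nothing here is Amazon's
# actual data or model. A naive scorer learns word weights from historical
# hire/reject decisions, so a token that merely correlates with gender
# ("women's") picks up a penalty.
from collections import Counter

history = [  # (resume tokens, was the candidate hired?)
    ("executed engineering captain chess", True),
    ("executed built football captain", True),
    ("built engineering rugby", True),
    ("women's chess club engineering", False),
    ("women's volleyball built software", False),
]

hired, rejected = Counter(), Counter()
for text, was_hired in history:
    (hired if was_hired else rejected).update(text.split())

def score(resume: str) -> int:
    # +1 per token seen on past hires, -1 per token seen on past rejections
    return sum(hired[t] - rejected[t] for t in resume.split())

# Two equally qualified resumes differ by a single gendered token:
print(score("engineering captain chess"))          # 3
print(score("engineering captain women's chess"))  # 1: "women's" is penalized
```

Note that the model was never told anyone's gender; the bias rides in on innocuous-looking correlations, which is exactly why it is so hard to spot.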

This isn't a novel issue. Biased categorization based on past data was central to a British medical school admissions scandal. Applicants with non-European names found their submissions ranked behind those with European names—and that was in 1988!

Amazon's case is merely the tip of the iceberg. Documented examples of unintentional bias (or outright inaccuracies) in AAI categorization range from the Dutch welfare scandal of 2020 to the UC Berkeley study that found a mortgage algorithm charging Black families higher interest rates. And for every instance like Amazon's that comes to light, countless others either never see the light of day or become submerged in the overwhelming volume of research on the subject.

Generative AI (GAI).

Generative AI (GAI), not to be confused with AGI (Artificial General Intelligence), refers to a subset of artificial intelligence models capable of generating new content. That content spans a wide range: images, music, text, video, and even 3D models. Unlike Analytical AI (AAI), which classifies input data, the chief goal of generative models is to produce new data that responds to or emulates its inputs.

Broadly speaking, these models are trained on extensive data—be it photographs or text—using the analytical techniques described above. This training helps them grasp the patterns, features, and structures intrinsic to that data. Once aptly trained, these models can churn out new data that mirrors, completes, or augments the original dataset.
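The train-then-generate loop is easy to see in a toy example. The Python sketch below learns which words follow which in a tiny corpus, then samples new sequences that mirror those patterns. The corpus and function names are invented for illustration; modern GAI swaps the frequency table for a deep neural network with billions of parameters, but the principle is the same.

```python
# Toy generative model: learn bigram patterns from a corpus, then sample new
# text that mirrors those patterns. Real GAI systems replace this frequency
# table with a deep neural network, but the train-then-generate loop is the same.
import random
from collections import defaultdict

corpus = "the sun sets over the beach and the family smiles at the camera".split()

# "Training": record which words follow which in the data.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start: str, length: int = 8, seed: int = 0) -> str:
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        followers = transitions.get(words[-1])
        if not followers:                        # dead end: no observed continuation
            break
        words.append(random.choice(followers))   # sample the next word
    return " ".join(words)

print(generate("the"))  # a new sequence that mirrors, but need not copy, the corpus
```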

Technologies like ChatGPT, Bard, DALL-E, and Midjourney are prime examples of GAI. This is the technology the media describes with fervent enthusiasm, and it is the central concern of Nicole Nguyen's Wall Street Journal article referenced earlier.

These technologies aren't inherently "bad." In fact, they are resourceful and entertaining tools with myriad applications: from aiding human tasks like data analysis and code troubleshooting to brainstorming concepts, product visuals, and design ideation. Deloitte, for instance, enumerated a thorough list of GAI use-cases, finding 60 in total. And that's likely not exhaustive.

GAI is not only useful but also immensely enjoyable to engage with. Those who have experienced ChatGPT or Midjourney firsthand know that the technology allows for the creation of the most fantastical things. As Nicole Nguyen observes, in today's digital age, "this kind of work can be done by anyone, with a few quick phone taps." However, if it's simple for us to craft false images or mimic voices using nothing more than our smartphones, it's undoubtedly easier for someone with a determined agenda to deceive, slander, or swindle.

A recent Europol report, referenced by The Economic Times, described an incident in which a corporate employee was tricked into transferring €33 million into a fraudulent account, deceived by a deepfake call that replicated the voice of the company's CEO. The example underscores how easily individuals can be exploited through this technology.

Yet GAI's ramifications aren't solely drastic or illicit. In professional spheres, tools like ChatGPT—and other specialized systems that may not be household names—are revolutionizing tasks, be it rapidly drafting HTML and CSS code, accelerating sales correspondence, or automating initial customer service interactions. A 2023 study by Fishbowl found that nearly half of respondents used a GAI tool like ChatGPT for their work (with 70% keeping it hushed from their supervisors).

Reservations about potential job losses from this technology are voiced loudly. There's no dearth of articles prophesying how GAI might shrink workforce needs or render entire professions obsolete. If debugging with ChatGPT expedites development, it indirectly lessens the demand for software developers. Similarly, GAI-driven UX design calls into question the need for UX designers or product artists. The recent writers' strike emphasized establishing guidelines around GAI's use, ensuring credits and compensation aren't compromised.

In the end, while current professionals may hold their ground, the overarching trend suggests fewer experts in the future, achieving better results faster.

To conclude, let me share a personal story:

I was recently engaged in a conversation with a number of lawyers and technologists. One of the technologists—I'll call him Harry—described how he had used ChatGPT to write legal letters while negotiating a trademark case before the USPTO against a third party (who had petitioned against Harry's trademark application). The lawyers in the room laughed and scoffed at him. "That's probably not a good idea," said one. "Yes, I don't think the USPTO is going to like you trying to muddle your way through the process like that," said another. "You really should get a lawyer," they all agreed. Harry then responded, "But I won the case." The smile quickly disappeared from the face of every lawyer in the room.

Where Does It End?

The short answer is that no one truly knows. In the future, we will most likely witness a reduction in the workforce coupled with an increase in productivity, much like the industrial revolutions of previous centuries. Some jobs will disappear entirely, such as that of the train fireman, a role rendered obsolete by diesel and electric propulsion. However, other work will simply be augmented, like the capacity for music producers to create music more efficiently through computer-based studio software, or the expansion of the journalism profession facilitated by platforms like Medium, YouTube, or TikTok.

In the present, though, we need to cultivate a more robust skepticism about what we read, see, and hear. In a world now navigating wars, economic woes, and—in the USA—a looming election cycle, the opportunities for manipulation by malicious parties are more abundant than ever, with significantly fewer obstacles. Deepfake images—like the one that misled many into believing Pope Francis is a fan of Balenciaga—can be amusing, but they could also convincingly deceive thousands, if not millions, into believing a political candidate uttered something egregious they never said, or be used to fabricate incriminating recordings of events that never took place.

Therefore, stay vigilant. Question your immediate impulse to believe something you see, particularly when it aligns with your cognitive biases. Above all, familiarize yourself with the technology: by learning to use ChatGPT, Midjourney, and similar services, you'll understand their capabilities and limitations, and in doing so, calibrate your own propensity to believe their output. To quote Sun Tzu: "Know thy enemy and know thyself; in a hundred battles, you will never be defeated."


