
Artwork By AI

Updated: Nov 27, 2023

The photo at the top of the Fall 2023 Off The Pews Newsletter was created with Adobe's Firefly AI image generator using the prompt "Thanksgiving table set for 12 with fall colors." At first glance, it is just what I asked for (even though I used a few too many words). Take a closer look, however, and you begin to see the strange chairs, the unbalanced candle arrangement, the odd plate assortment, and other idiosyncrasies.

AI has been a part of our lives for decades, but only within the last several years has it entered mainstream conversation, and those conversations are lagging behind development. That has never been a great recipe for humanity.

This article is a deeper dive into some of AI's issues.

The Hot Button Topics Surrounding AI

  • Ethics - bias in data sets and creative infringement

  • Oversight - Are humans in the loop?

Before we get into the hot button topics, let's define AI:

Artificial intelligence is the intelligence of machines or software, as opposed to the intelligence of humans or animals. It is also the field of study in computer science that develops and studies intelligent machines. "AI" may also refer to the machines themselves.

Read more on Wikipedia

Think of AI as a small school child who daily absorbs new lessons in class, at home, in church, and by playing games with friends. Every experience the child has creates a data set from which they draw guidance for future interactions with the world.

AI is fed data sets, given a set of rules (algorithms), and asked for a result.

The school child analogy above is an over-simplification of the process and the complexities involved. At its core, however, artificial intelligence is neither good nor bad. The quality of the "upbringing" (the programming and the data sets provided to the AI system) ultimately determines whether the services AI delivers over time are a benefit or a detriment.

Let's look at just one example of AI in action, its intentions and its unintended outcomes. MIT Technology Review published an article in June 2021 discussing efforts LinkedIn made in its job-matching AI algorithms:

Most matching engines are optimized to generate applications, says John Jersin, the former vice president of product management at LinkedIn. These systems base their recommendations on three categories of data: information the user provides directly to the platform; data assigned to the user based on others with similar skill sets, experiences, and interests; and behavioral data, like how often a user responds to messages or interacts with job postings.
In LinkedIn’s case, these algorithms exclude a person’s name, age, gender, and race, because including these characteristics can contribute to bias in automated processes. But Jersin’s team found that even so, the service’s algorithms could still detect behavioral patterns exhibited by groups with particular gender identities. For example, while men are more likely to apply for jobs that require work experience beyond their qualifications, women tend to only go for jobs in which their qualifications match the position’s requirements. The algorithm interprets this variation in behavior and adjusts its recommendations in a way that inadvertently disadvantages women.
“You might be recommending, for example, more senior jobs to one group of people than another, even if they’re qualified at the same level,” Jersin says. “Those people might not get exposed to the same opportunities. And that’s really the impact that we’re talking about here.” Men also include more skills on their résumés at a lower degree of proficiency than women, and they often engage more aggressively with recruiters on the platform.
To address such issues, Jersin and his team at LinkedIn built a new AI designed to produce more representative results and deployed it in 2018. It was essentially a separate algorithm designed to counteract recommendations skewed toward a particular group. The new AI ensures that before referring the matches curated by the original engine, the recommendation system includes a representative distribution of users across gender.

I would be curious to know if the AI model has remained permanently "corrected" or if continued input of data has caused the AI model to wander again into biased results. We know we have not rid the world of bias, so presumably AI models will continually require adjustment as new biased data sets are fed into the models.
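LinkedIn has not published the algorithm itself, but the representative re-ranking layer described in the excerpt can be sketched in a few lines. Everything below (the function name, the group labels, the 50/50 target) is an illustrative assumption, not LinkedIn's actual code:

```python
def rerank_representative(candidates, group_of, target_share=0.5, top_k=10):
    """Re-rank relevance-sorted candidates so the returned list keeps
    group 'A' near a target share. This is a toy stand-in for the
    'separate algorithm' described in the excerpt; names and the
    50/50 target are invented for illustration."""
    group_a = [c for c in candidates if group_of(c) == "A"]
    others = [c for c in candidates if group_of(c) != "A"]
    result = []
    while len(result) < top_k and (group_a or others):
        share_a = sum(1 for c in result if group_of(c) == "A") / max(len(result), 1)
        if group_a and (share_a < target_share or not others):
            result.append(group_a.pop(0))  # pop(0) preserves the original relevance order
        else:
            result.append(others.pop(0))
    return result
```

Note that the original matching engine still ranks by relevance; this second pass only reorders within that ranking, which matches the two-stage design Jersin's team describes.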

Ethics: Bias and Creative Infringement

When we're talking about ethics where AI is concerned, we are talking about bias carried out on a systemic scale and, on the human level, creative infringement.

Our cultures and, therefore, the data extracted from those cultures are biased. Discrimination and under-representation of many varieties are baked into the numbers, and AI results are then skewed based on the questionable data.

Back to Our Opening Photo and an Oversight Issue

Our newsletter opened with a photo generated by Adobe's Firefly AI image generator, a fledgling AI model still under development (not even a toddler yet). This AI was created to generate an image from a text prompt.

Full disclosure: I am an Adobe subscriber, and I have only used the Firefly image generator, though there are others on the market, so my experience is limited.

Stepping away from the photo quality issue (Firefly will continue to learn and improve this function), there are the ethical issues:

  • From where/whom does it draw its source material? Adobe has taken an ethical step in the development of this model because "Firefly was trained only on open source images, content that is no longer in copyright, and content from Adobe Stock," according to "Adobe Firefly: everything you need to know about the AI image generator" by Ken Coleman. Other AI image generators, by contrast, are free to scrape the internet and use any artwork without consent from the creator.

  • Will artists get paid or credited for their contribution to AI-generated art? Does the AI get the credit? Adobe Stock comes from artists who've uploaded their photos in the hope of getting paid for their art. Apparently, this is an issue to be worked out later.

Where AI-generated art is in use, the entire workflow relies on a human noting that the presented artwork is AI generated. I would like to think everyone will be transparent about the origin of the artwork they post or publish, but copyright infringement is rampant on the internet, so I'm skeptical.

Additionally, when I used Firefly to generate the image, I hoped the file's metadata would list the source material used and the creators' names or handles. It does not. The file name "Firefly thanksgiving table set for 12 with fall colors 55589.jpg" reveals its origin, and Adobe embeds an XMP packet (visible in File Info -> Raw Data) with information serving Adobe, but it has not generated any of the other File Info fields that would help credit the original creators of the sourced photos.
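For readers who want to check a file themselves: an XMP packet is plain XML embedded inside the image, and it can be pulled out with a few lines of code. This is a generic sketch, not an Adobe tool; the search markers are the standard XMP element names:

```python
def extract_xmp(data: bytes):
    """Return the XMP packet (XML wrapped in <x:xmpmeta ...> ... </x:xmpmeta>)
    embedded in an image file, or None if no packet is present."""
    start = data.find(b"<x:xmpmeta")
    end = data.find(b"</x:xmpmeta>")
    if start == -1 or end == -1 or end < start:
        return None
    return data[start:end + len(b"</x:xmpmeta>")].decode("utf-8", errors="replace")

# Typical use:
# xml = extract_xmp(open("Firefly thanksgiving table set for 12 with fall colors 55589.jpg", "rb").read())
```

Running this on the Firefly file shows Adobe's own fields, but nothing crediting the source artists.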

This is an oversight issue, and I happen to know AIGA (the American Institute of Graphic Arts) encourages members to fully disclose AI-assisted content. They also encourage members to give credit to other creators; in fact, creators provide credit lines to be used with their art. Unfortunately, there is no body in place to enforce the practice.

Our culture has become entirely too comfortable with copyright infringement because of social media, and as a result it has become normalized. That doesn't mean it is right or just; it simply means the scale of the infringement is too large to prosecute.


Oversight: Are Humans in the Loop?

It should be clear from what we've read so far, from so many sources, that watching over the performance of an AI model is necessary. How many users currently relying on the work of AI are aware of the issues, or have the knowledge to do anything to correct an errant system?

AI models, like people, can be working from bad or incomplete information. The problems arise in the space where AI and the human condition meet. The organizations deploying AI need to monitor their models' performance.

Part of the issue here is that human decision makers in the loop have been trained to deploy a software package that does A, B, and C reliably. If C gets a little wonky, there will be a patch for that. These software packages are static in their function, and we are accustomed to upgrading them regularly when there are bugs or new features. AI doesn't wait for human programmers to update how it works; it takes in new input (data sets) and changes how it functions all by itself—that's the "intelligence" we're talking about.
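That difference can be seen in even the smallest "learning" program: a toy model that keeps a running average of every example it is shown. The data stream below is invented, but the drift is the point:

```python
class OnlineModel:
    """A toy model that updates itself with every new example it sees.
    Unlike a static software package, its output drifts with its input."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n  # incremental running average

    def predict(self):
        return self.mean
```

Feed it a balanced stream and it settles near 0.5; keep feeding it a skewed stream and its predictions shift, even though no programmer patched anything. Real models are vastly more complex, but the monitoring problem is the same.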

Putting humans in the loop means that, every now and then, humans need to check whether the AI model is still on the right track.

Additionally, there need to be human-to-human feedback controls in place to address outlier cases that AI just doesn't have the data to consider, errors in standard measures, and injustice and discrimination.

An Example:

I have an old car. It came off the assembly line in 2001. A few years after I bought it I started taking the train to work. I've maintained it, driven it gently, and it has a garage protecting it from the elements, so my old car is in good shape with low mileage.

A few years ago, I consulted the Blue Book to assess its value. I plugged in all the details, leaving no empty fields. The system returned "no results found." My car is an outlier: there are too few low-mileage 2001 vehicles still in service, so it is an invisible unicorn by Blue Book standards.

The data sets employed by the Blue Book AI simply don't have cars with my data points on record. The AI doesn't learn from the entire world of possibility, it learns from the commonly reported data set it is fed.

Fortunately, I can still drive my old car onto a car sales lot to prove it exists and runs. However, I am a woman (this may or may not be strike one), my knowledge of car values is not deep (certainly a strike), and I cannot use the bias-free Blue Book AI system to ascertain the car's value before I go to the lot, which puts me at a disadvantage in negotiations (certainly another strike).

The AI model needs updated parameters to take my unicorn into consideration and render a fair valuation. This is a low-level, doesn't-matter-too-much example of an error in a non-critical AI system. But look at the ease with which my case was ignored: a simple "no results found," and I was dismissed. It is not a malicious system, but in human terms it is a little neglectful, and it certainly doesn't have the capacity to serve me.
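The dead end I hit is easy to reproduce in miniature. In this hypothetical valuation lookup (all records and thresholds are invented, not Blue Book's method), an outlier gets a silent None where a human-in-the-loop design would route the case to a person:

```python
def estimate_value(records, year, mileage, tolerance=10_000):
    """Average price of comparable sales; None means 'no results found'.
    All records and thresholds here are invented for illustration."""
    comps = [r["price"] for r in records
             if r["year"] == year and abs(r["mileage"] - mileage) <= tolerance]
    if not comps:
        return None  # the outlier is silently dismissed, with no referral to a human
    return sum(comps) / len(comps)
```

A kinder design would treat that None as a trigger for human review rather than a terminal answer.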

What happens to people when they are dismissed from consideration for life's necessities because they fall outside the expected statistical norms of an AI model?

Unfortunately, AI systems that are barely more educated than toddlers have been given gatekeeper roles in multiple industries, and people are slipping through the cracks.

Reasons for Hope

After reading all of these articles outlining the flaws in AI, it is easy to be discouraged and to fear that these complex systems will forever worsen the human experience.

There is reason for hope! The flaws in these systems have been identified, and steps to correct the problems are being researched and implemented. The UNESCO Recommendations publication mentioned above is just one cause for hope. On October 30th, President Biden signed an executive order regarding the "safe and responsible development of AI." One site I found has a fantastic list of AI organizations, with descriptions of each one's focus of development and oversight.

In our own neighborhood, the University of Chicago Booth School of Business's Center for Applied Artificial Intelligence has established an Algorithmic Bias Initiative, "sharing research insights with healthcare providers, payers, vendors, and regulators to help identify and mitigate bias in commonly used healthcare algorithms." They published a Playbook intended for C-suite leaders, technical teams, and regulators hoping to mitigate bias in their AI models. Their focus has been on providing more equitable health care to everyone.

The greatest cause for hope is that conversations around AI models are becoming part of the mainstream. The more information made available, the better.

And another cause for hope is that there are still humans in the systems where AI has been deployed to help. On October 26th, PBS covered a human-centered and human-driven initiative to help people clear bench warrants for minor infractions, thereby keeping people out of jail who really shouldn't be there. Many AI models are put to work for scalability purposes—systems are overloaded, and there simply isn't enough time or money to deal with the overload. But some large-scale systemic problems absolutely need to be solved on the human level, using the subtlety of human intelligence.

There is room for creativity in developing the solutions (high- and low-tech) to the problems we face.

In closing, here's just one more article, with feedback from 50 notable individuals on the question, "What are the hardest problems at the intersection of technology and society that deserve more attention?" (What's Next in Tech, the weekly newsletter of MIT Technology Review, published on LinkedIn, Nov. 7, 2023):

Bill Gates, philanthropist, investor, and former CEO of Microsoft: Technological innovation is one of the most powerful tools we have to address the world’s toughest challenges, especially in the areas of health, development, climate, and education. But there’s not nearly enough focus on making its benefits available to everyone. Now, as we consider the potential for AI to improve life for millions of people around the world, more attention is needed on responsible and equitable development, so tools may be delivered by and for those who need it most.
