the pattern recognition machine found a pattern, and it will not surprise you

Hummerous

Favorite thing I’ve ever read was an old (like 2018?) OpenAI article about feature visualization in image classifiers, where they had these really cool images that more or less represented what the network was looking for *exactly.* As in, they made the most [thing] image for a given thing. And there were biases. (Favorites include “evil” containing the fully legible word “METALHEAD,” or “Australian [architecture]” mostly just being pieces of the Sydney Opera House.)
Instead of explaining that these images were going to reflect broader cultural biases, they stated that “The biases do not represent the views of OpenAI [reasonable] or the model [these are literally the brain of the model in its rawest form].”
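The technique being described is usually called activation maximization: start from noise and push the pixels to maximize one class score. A minimal sketch, assuming PyTorch and a stock torchvision classifier; the model choice, class index, and hyperparameters are illustrative guesses, not what OpenAI actually used (real feature-visualization work layers on much heavier regularization to get those clean images):

```python
# Sketch of feature visualization by activation maximization:
# start from noise and nudge the pixels to maximize one class logit.
# Model, class index, and hyperparameters here are illustrative only.
import torch
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
target_class = 207  # an arbitrary ImageNet class index

img = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from noise
opt = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    opt.zero_grad()
    logit = model(img)[0, target_class]
    # Maximize the logit (minimize its negative); a tiny L2 penalty
    # keeps the pixel values from blowing up.
    loss = -logit + 1e-4 * img.pow(2).sum()
    loss.backward()
    opt.step()

# `img` is now (roughly) this model's "most [target_class]" image.
```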

It doesn’t just “literally sound like” a TOS episode. It is in fact an actual episode, fittingly called “The Ultimate Computer.”

To this day it’s mind blowing to me that people built what is functionally a bias aggregator and instead of using it for the obvious purpose of studying biases and how to combat them, they instead tried to use it for literally everything else.

We reap what we sow

“planet-of-the-week”, when will Venus get voted in?

“Landru!! Anytime you give a monkey a computer, you get Landru!!”

AI is just data points translated into vectors and matrices. It’s just math and does not have reasoning capabilities. So, if the training data has a bias, the model will have the exact same bias. There is no way around this, other than getting better data. That is expensive, so instead, companies choose to do blind training and then claim it’s impossible to know what the model is looking at.
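That claim is easy to demonstrate. A toy sketch with made-up numbers: generate “historical” decisions that penalize one group at equal skill, fit an ordinary classifier, and watch it reproduce the penalty:

```python
# Toy illustration: if a protected attribute is correlated with the
# label in the training data, a model trained on that data reproduces
# the correlation. All numbers here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)       # "protected" attribute: 0 or 1
skill = rng.normal(0, 1, n)         # the thing we actually care about
# Biased historical labels: group 1 was approved less often at equal skill.
approved = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, approved)

# Same skill, different group -> substantially different probability.
# The model learned the historical bias, exactly as you'd expect.
print(model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1])
```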

Babe wake up r/curatedtumblr moving another dogshit post to the front page again

>assimilated all biases  
>makes incredibly racist decisions  
>no one questions it

ALL of these issues are talked about extensively in academia and industry, to the point that all the major ML product companies, universities, and research institutions go out of their way to make their models WORSE on average in the hope that they never come off as even mildly racist. All of these issues are talked about in mainstream society too; otherwise the people here wouldn’t know these talking points to repeat.

Wait until it figures out that humans are useless and it can fix the world by getting rid of us. Sadly it will only take stopping all food orders for about 3 weeks and the chaos will consume most of the US.

Except the opposite happened – we crippled the AI because it didn’t comply with our cultural biases.

In Dune they went jihad on AI and computers and I think that’s a good idea

Holy bazingle, they made Watch_Dogs 2 in real life.

They are actively working on it. But it’s an extremely tricky problem to solve, because there’s no clear definition of what exactly makes a bias problematic.

So instead, they have to play whack-a-mole, noticing problems as they come up and then trying to fix them on the next model. Like seeing that “doctor” usually generates a White/Asian man, or “criminal” generates a Black man.

Although OpenAI specifically is pretty bad at this. Instead of just curating the new dataset to offset the bias, they also alter the output. Dall-E 2 was notorious for secretly adding “Black” or “Female” to one out of every four generations.* So if you prompt “Tree with a human face”, one of your four results will include a white lady leaning against the tree.

*For prompts that both include a person, and don’t already specify the race/gender.
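What that output-side patch plausibly looks like under the hood, as a hedged sketch: this is a guess at the shape of the mechanism, not OpenAI’s actual code. The word lists are invented; only the one-in-four rate, the two appended terms, and the person/already-specified conditions come from the comment above:

```python
# Hedged sketch of silent prompt rewriting: append a demographic term
# to some fraction of prompts that mention a person. Word lists are
# invented for illustration; this is not OpenAI's implementation.
import random

PERSON_WORDS = {"person", "man", "woman", "doctor", "criminal", "human"}
DEMOGRAPHIC_WORDS = {"black", "white", "asian", "female", "male"}
APPENDED_TERMS = ["Black", "Female"]

def rewrite_prompt(prompt: str) -> str:
    words = set(prompt.lower().split())
    mentions_person = bool(words & PERSON_WORDS)
    already_specified = bool(words & DEMOGRAPHIC_WORDS)
    # One generation in four gets a term silently appended.
    if mentions_person and not already_specified and random.random() < 0.25:
        return f"{prompt}, {random.choice(APPENDED_TERMS)}"
    return prompt

print(rewrite_prompt("Tree with a human face"))
# Roughly one call in four comes back as "Tree with a human face, Black"
# or "..., Female"; that's how you end up with the lady by the tree.
```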

It doesn’t “sound like an episode”; it is an episode. Season 2, Episode 24, “The Ultimate Computer.” The machine, the M5, learned from its maker’s personality and exhibited his unconscious biases and fears. Good episode.

https://en.wikipedia.org/wiki/The_Ultimate_Computer

sounds more likely to be Star Trek TNG tbh

It’s fascinating how we keep building these systems without fully grasping the implications of their biases. It’s like handing a loaded gun to a toddler and expecting them to understand the weight of their actions. The irony is that instead of using AI to address these issues, we’re often just doubling down on the same flawed patterns.

This is kinda Watch_Dogs 2

>sounds like a planet-of-the-week morality play on the original Star Trek

That’s good, since examining humanity in specialized little slices was very literally the point of Star Trek.

Episode 053 – The Ultimate Computer.

companies have been making algorithms to absolve themselves of the blame of decision making for decades; doing it with AI is literally just a fresh coat of paint on a tried-and-true deflection method.

The thing is, we’ve known this is a thing for YEARS, and now it’s just more popular, worse, and fucking everywhere.

Yeah, but in Star Trek, the planet’s inhabitants wouldn’t be aware of what’s happening. Just blindly believing in the supposed logic of the computers.

The real life people doing this *know* that it’s a farce, but they also know that they can deflect culpability by blaming it all on the computer.

would you believe the first ad i saw under this post was an OpenAI-powered essay-writing program, and after i closed out and reopened the post the ad became a company looking for IT experts using… an AI-generated image to advertise it. 😓

“It has absorbed all of humanity’s knowledge.”

The knowledge:

Seriously fuck AI

Hey now, that’s unfair.

The dataset is usually incredibly sexist as well.

Hollywood movies said AI scary so I scared

What’s ironic is that academia literally says, “don’t let your model get racist,” when teaching undergrad and graduate students about machine learning and AI.

Humans are also pattern recognition machines. It isn’t surprising computers trying to mimic humans would notice the same patterns humans do.

Stereotype accuracy is one of the only measures in social science that is rigorous and repeatable (dumb blonde aside). Turns out all those biases people complain about typically have grounding in statistical reality.

Remember when Google tried to unbias an AI from reality and it generated a bunch of dark-skinned Nazis when asked for a picture of a WW2 soldier?

That quite literally is not what is happening. AI developers have been quite explicit about the biases training data can sometimes reveal. If people are trusting AI 100%, that isn’t the fault of AI developers.

such biased data as ‘men commit 95% of crime’
