Why Data Scientists Actually Need History (Even If You Think You Don’t)


If you work with data for a living, you have probably noticed how easy it is to get sucked into the technical side of things. New model? Cool. Faster GPU? Even better. But here is the thing most people never pause to consider: all that code you write is sitting on top of years—sometimes centuries—of human decisions, mistakes, biases, and plain old messiness. And if you ignore that part, your “smart” system can end up repeating the same problems people created long before Python existed.

This is not some abstract philosophy talk. It is happening right now.

A few folks in the digital humanities space have been trying to shake some sense into the tech world, and honestly, their message hits harder than most AI conference keynotes. At a UCLA event on October 14, three researchers—Lauren Klein, Roopika Risam, and Cindy Anh Nguyen—basically said the quiet part out loud: AI and data science are not as clean or objective as we pretend. They carry history inside them, whether we acknowledge it or not.

And if we want to build systems that do not screw people over, then yes, data scientists need to care about history.


The Problem With Pretending Data Is “Neutral”

Here is the uncomfortable truth: your model does not care where the data came from. It does not ask why some groups are overrepresented and others are barely visible. It does not know what power structures shaped who got counted and who got ignored. You feed it numbers, it spits out results. That is all.

This might sound strange, because we tend to think of data as facts. You know, objective, clean, mathematical. But once you start digging into how datasets are actually created—census records, surveys, online behavior logs—you will notice that human fingerprints are everywhere.
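To make that a little concrete: here is a minimal sketch of what "asking who got counted" can look like before you ever train a model. Everything here is hypothetical (the toy records, the group labels, and the 20% representation floor are all invented for illustration, not a standard):

```python
from collections import Counter

# Hypothetical toy dataset; in practice you'd load your real records.
records = [
    {"group": "A"}, {"group": "A"}, {"group": "A"},
    {"group": "A"}, {"group": "A"}, {"group": "B"},
]

counts = Counter(r["group"] for r in records)
total = sum(counts.values())

# Flag any group whose share falls below an (arbitrary) 20% floor.
for group, n in sorted(counts.items()):
    share = n / total
    if share < 0.20:
        print(f"warning: group {group!r} is only {share:.0%} of the data")
```

A five-line audit like this will not tell you *why* group B is underrepresented; that is exactly where the historical context the panelists are talking about comes in.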

That is exactly why Klein, Risam, and Nguyen are pushing for a more human-centered approach. Not because it sounds nice, but because ignoring context produces garbage decisions at scale.


Why History Is Not Just “Extra Reading”

Lauren Klein put it bluntly: humanistic theories can and should influence technical work. That might sound weird at first—like, why bring philosophy or history into machine learning? But if you think about it for a minute, it starts to make sense.

Humanities give you tools to understand people. And data is basically people, disguised as numbers.

Klein said something worth sitting with for a moment: understanding how histories overlap is not about feeling guilty or apologizing for old mistakes. It is about learning from them so you do not build systems that make the exact same mistakes again.

In other words, history is a debugging tool. For society.


Visual Data Is Not Just “Pretty Charts”

Roopika Risam took the conversation in a direction more data folks should pay attention to: visualization. Most of us treat visualization like decoration, something you slap onto a dashboard to “make it look nicer.” But Risam pushed a deeper idea—visuals shape interpretation. They turn raw data into a story.

And once you realize visuals are storytelling, you also realize they carry meaning, emotion, and bias just as much as any written narrative.

Risam’s work on colonialism and digital culture showed how historical power structures still shape the modern tech landscape. And if that sounds too big-picture, think of it like this: the same types of biases that shaped international systems decades or centuries ago still sneak into how datasets are built and how algorithms behave today.

Her advice was simple: if you are working in tech, you have a responsibility to understand the impact of the tools you use. You cannot keep hiding behind the excuse of “I am just the engineer.”


Slowing Down Might Actually Make You Better at Your Job

Then came Cindy Anh Nguyen with something most data people never think about: slowness. And not “my code is slow” slowness, but intentional slowness—pausing to understand the actual communities behind your data.

It sounds a bit counterintuitive, especially in a field obsessed with scale and speed. But here is the thing… slowness gives you context. And context gives you accuracy.

Nguyen described “contextualized slowness” as the act of understanding the people, culture, and environment the data comes from. When you skip this step, you end up solving the wrong problem or, worse, creating a new one.

And let’s be real: half the bad AI headlines out there could have been prevented if someone had slowed down long enough to ask, “Wait, does this even make sense for the community affected?”


Students Want This… but Most Programs Do Not Teach It

A student, Sierra Talbert, attended the event because she works at the intersection of computer science and gender studies. Her reaction said a lot about the state of education right now.

She pointed out that there are almost no places where technical and humanistic thinkers can sit down and talk without one side feeling out of place. UCLA’s DataX program is rare because it actually encourages that kind of interdisciplinary thinking.

Most CS and data programs are hyper-focused on algorithms, math, and coding. Nothing wrong with that, but when you remove context from the equation, you produce graduates who can build powerful systems but have absolutely no idea how those systems hit real people.

That knowledge gap shows up everywhere—from biased recruitment models to unfair credit scoring to AI systems that treat entire communities like statistical errors.


The Real Bottom Line: If You Ignore History, You Build Worse Systems

Lauren Klein wrapped up with something the tech world needs to hear more often. When we acknowledge history—really acknowledge it, not just in some corporate-ethics-policy way—we start making more intentional, careful, and accurate decisions.

Better context creates better models. Better models create better outcomes. It is not complicated.

The data scientists who will stand out in the next decade are not the ones who train the biggest or fastest model. They are the ones who build systems with intention, curiosity, and awareness of how their tools actually impact communities.


So Here’s the Thing…

History is not this dusty, irrelevant appendix to technology. It is the missing half of the equation.

If you are serious about building AI systems that serve people instead of steamrolling them, you have to understand the forces that shaped your data long before you downloaded the CSV.

And no, this is not soft thinking. This is what professional rigor actually looks like.
