Are the Robots Really Coming for Our Jobs?

Dmitri (Mitya) Miller, Senior Vice President, General Manager, Arcesium

Artificial intelligence has captured the public imagination like few technologies in recent years.

The rise of AI has created an intrigue often associated with futuristic robots taking over human tasks. But while robots (or at least their AI counterparts) may assist with certain tasks, they’re far from replacing us entirely.

Large language models (LLMs) like ChatGPT, GPT-4, and GitHub Copilot generate text that mimics human writing. Yet the output comes with an important catch: the machines are trained to expertly manage information and produce text a person would find plausible, but they don't gut-check what they ingest, and they can't verify the accuracy of what they produce.

In situations where precision is key, context and content matter.

AI’s ability to perform depends on the quality of the data it is fed. Poor-quality information, biased data, or outdated knowledge can lead to inaccurate predictions and decisions.

It probably comes as no surprise to hear how respondents in this Playbook ranked their top challenges when using data-led AI solutions: integration with existing systems (56%), data completeness and variety (54%), and data accuracy (50%).

In the financial world, if you ask an LLM to synthesize a trading recommendation (most of the major models now display warnings against exactly this use), it can confidently recommend a trade with detailed reasoning. But there's a major caveat: the model's suggestions may not be connected to current market information.

Similarly, when you ask Copilot to write code, there's no programmer “inside the robot.” It draws from repositories of code without understanding the context, authorship, or licensing implications, potentially leading to legal issues.

The declining availability of content to build AI tools is also raising eyebrows. A recent study from the Data Provenance Initiative, an MIT-led research effort, examined more than 14,000 web domains and found an “emerging crisis in data consent” as publishers and online platforms increasingly restrict AI tools from crawling their sites and freely harvesting information.

AI, much like robotics, is a work in progress

The models are only as good as the data foundation they’re built on. To build AI systems that are successful and agile, firms must ensure the data fed into them is clean, accurate, and reliable. Just as a robot must be maintained to prevent malfunctions, data issues must be fixed before they enter the AI model; doing so improves the accuracy of analytic output and predictions and the reliability of the system’s overall results.

According to the survey, 86% of respondents reported that demand for data analytics talent has moderately to significantly increased over the past few years. As technology takes on more repetitive tasks, people will be the ones leveling up AI’s output and deciding how — or how not — to use it. Innovation often accelerates when people discover that new processes or tools enable them to perform tasks on their own.

As many of us are (hopefully) coming to learn, LLMs work best as augmentation for knowledgeable humans, not as a replacement.

Much like robots assisting with repetitive tasks, LLMs help by suggesting wording, finishing phrases, and producing raw drafts to edit. But we must stay grounded in the reality of how the technology works. It isn’t magic. As AI and automation evolve, the future of how we work will be about transformation, not replacement. People and tools working together will elevate each other’s strengths, leaving humans to make the decisions that require a unique, personal touch.
