AI has entered work processes, but workers aren’t prepared equally

AI at work is no longer an abstract adoption story.
Pew Research Center reports that 21% of U.S. workers say at least part of their job is done with artificial intelligence as of 2025, up from 16% about a year earlier.

So it’s no longer enough to talk about AI as a trend, a popular tool, or another new technology. The workplace question is now about how AI enters specific tasks, who controls the output, and where human review still matters.

Artificial intelligence is now common enough in professional tasks that companies need to treat it as part of how work is assigned, reviewed, and approved. In writing, research, planning, reporting, and internal communication, even basic AI literacy can affect how quickly someone can draft, review, organize, or check work.

The useful question is narrower: how is AI actually being used inside work?

AI should be discussed task by task

Before a company adapts a workflow around artificial intelligence, it has to separate tasks that can be assisted by AI from tasks that still depend on judgment, context, responsibility, or human review.

Frequency also changes the risk. A tool used once a month for brainstorming is different from a tool used every day to draft client emails, summarize meetings, prepare internal reports, or shape public-facing content.

AI can be used for writing, editing, brainstorming, summarizing, and analysis. But these tasks don’t carry the same level of risk. Drafting may benefit from speed, while analysis depends more on source quality, prompt design, and the person reviewing the result.

Pew’s data makes this distinction clearer. The growth is not mainly coming from workers whose jobs are mostly done by AI: the share of workers whose “all or most” work is done with AI remains small, around 2%. The larger growth is among workers who say AI does “some” of their work.

So the change is happening through smaller actions: drafting, summarizing, shortening text, checking wording, collecting ideas, or preparing the first layer of analysis.

The growth of AI is not the same as full automation. It’s more accurate to look at separate tasks and understand where AI helps, where it changes the process, and where its output still needs serious human review.

The same problem appears in writing work. A tool can help at the draft level, but it doesn’t replace the author’s structure, micro-skills, or understanding of the task. I wrote about that distinction here.

AI has become part of work, but it hasn’t become normal for everyone

It would be inaccurate to say artificial intelligence is already normal for all workers.

Many people don’t use AI at all. Some use it only outside work. They may open ChatGPT for a personal question, but still not know how to use AI in a work task, where the output may affect clients, coworkers, internal decisions, or public communication.

Pew also reports that 65% of workers still don’t use AI much or at all in their work.

That means many workers may not know how to write prompts, choose a tool, review an output, protect sensitive information, or decide whether AI fits the task at all.

There is a second issue: professional use of generative AI is unevenly distributed. Among people with a bachelor’s degree or more, professional AI use is higher than among people with less formal education.

The workplace conversation shouldn’t stop at adoption. The harder issue is whether workers know how to use AI inside real work tasks. Many workers are now expected to function around AI before they fully understand how to use it well.

Employees may use AI before company rules catch up

Employees can use AI in ways their company hasn’t formally adopted, reviewed, or explained. Sometimes workers don’t know the company’s position on AI use at all.

In some companies, AI is already used in internal tasks without a clear explanation of where and how. An employee may also use AI on their own and still not know whether that use fits company rules.

Gallup’s data shows how unclear this can become inside organizations. In Q3 2025, 37% of employees said their organization had adopted AI for productivity, efficiency, and quality, but 23% said they didn’t know whether their organization had adopted AI.

A policy can’t guide behavior if employees don’t know whether the policy exists, what it allows, or what it prohibits.

The issue is not only whether a company officially uses AI. The company also needs to know whether employees are putting client data, research notes, internal documents, draft decisions, or public-facing content into AI tools.

This is where internal AI rules stop being abstract.

Companies need policies that separate personal experimentation from work done on behalf of the company, especially when the task involves clients, confidential information, public content, or decisions that affect other people.

AI policies need to define the actual work

When an employee uses AI on their own, that use may not match company policy.

A useful policy should answer the questions employees actually face while doing the work:

  • which AI tools are allowed

  • when AI use is appropriate

  • which tasks can be assisted with AI

  • what data cannot be entered into AI tools

  • where human review is required

A policy built only around bans and permissions can miss the actual risk. The risk usually depends on the task: what the employee is doing, what information is involved, what kind of output is produced, and who approves the result.

AI can’t be regulated through overly broad language.

Companies need to define the task, the context, the data involved, the risk level, and who owns the final output. Otherwise, employees still won’t know what they can do safely.
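To make that concrete, here is a minimal sketch of what task-level rules could look like when written as structured data instead of broad language. Everything in it is hypothetical: the tasks, sensitivity levels, and owners are invented for illustration, not taken from any real policy.

```python
from dataclasses import dataclass

# Hypothetical sensitivity levels, ordered from least to most sensitive.
# A real policy would use the company's own data classification.
LEVELS = ["public", "internal", "confidential"]

@dataclass
class TaskRule:
    """One task-level policy entry: the task, the data it may touch,
    whether a person must review the output, and who owns the result."""
    task: str
    max_data_level: str    # most sensitive data allowed in the tool
    review_required: bool  # must a human check the output?
    output_owner: str      # who is accountable for the final result

# Illustrative entries only; the point is that permission attaches to
# a task plus a data level, not to the tool as a whole.
RULES = [
    TaskRule("brainstorm campaign ideas", "internal", False, "marketer"),
    TaskRule("draft client email", "internal", True, "account manager"),
    TaskRule("summarize research notes", "confidential", True, "analyst"),
]

def check(task: str, data_level: str) -> str:
    """Answer the question an employee actually faces mid-task:
    can I put this data into an AI tool for this task?"""
    for rule in RULES:
        if rule.task == task:
            if LEVELS.index(data_level) > LEVELS.index(rule.max_data_level):
                return "blocked: data too sensitive for this task"
            review = ("human review required" if rule.review_required
                      else "no review required")
            return f"allowed; {review}; output owned by {rule.output_owner}"
    return "not covered: ask before using AI"

print(check("draft client email", "internal"))
# allowed; human review required; output owned by account manager
print(check("draft client email", "confidential"))
# blocked: data too sensitive for this task
```

The structure matters more than the code: every allowed task names the data it may touch, the review it requires, and the person who owns the output, which is exactly the information a ban-or-permit policy leaves out.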

This is related to another problem: adoption can grow without real workflow redesign. In Q1 2026, half of employed American adults said they use AI in their role at least a few times a year, but Gallup also notes that employees report productivity gains, not a fundamental change in how work gets done.

Tool access is not the same as a workflow. There is a real difference between “people use AI” and “the company knows how that use should happen.”

AI access can become part of workplace inequality

The same unevenness shows up in access.

There are many tools. New ones appear constantly. Not every worker has access to the same tools, and not every worker receives enough training to use them well.

AI productivity gains are more available to workers whose jobs already involve writing, analysis, research, planning, or digital tools. Professional AI use also rises with education and income. In that sense, AI adoption is also about access, skills, and type of work.

That means the ability to work with AI can become part of professional inequality.

Some employees already use AI as a regular work tool. Others still don’t know how to approach it even in simple tasks.

Buying a tool or allowing AI use doesn’t solve the gap by itself. Companies need training tied to real roles: what a marketer can use AI for, what a manager can review with it, what an analyst should not delegate, and when someone has to check the output before it becomes part of the work.

AI workflows still need human review

AI should be tested against the human version of the task: what it gets right, what it misses, and what kind of review it needs.

Leadership has to decide who reviews AI output, who owns mistakes, and which tasks are too sensitive to delegate.

Even strong AI tools still need a person to evaluate the result. AI can help with drafts, summaries, first-pass organization, and routine wording checks, but it can still produce inaccurate or incomplete output. Human review should remain a required part of any work process where artificial intelligence is used. This is also close to the problem I described in my article: AI-assisted writing becomes weak when the writer gives away meaning-level decisions.

The productivity story also has a limit. Brookings reports that only 19% of all respondents said AI increased their productivity in daily tasks, and only 4% said productivity increased significantly.

This doesn’t mean AI doesn’t help. It means access to the tool alone doesn’t guarantee a strong result.

The result depends on the task, the worker’s skill, the review process, the context, and who takes responsibility for the final decision.

This is the part companies can’t skip. More AI use doesn’t automatically create better work. The useful decisions are narrower: which tasks AI can support, what must be reviewed, and who remains accountable for the final output.
