Claude Discovers a New Skill Set as the Human Overlords Miss the Mark
The unexplored effect of many people using AI as their individualized agents?
I asked Google AI what percentage of jobs are projected to be replaced by AI in the next year, and it gave this analysis:
High-Risk Categories
Entry-Level White-Collar: Experts, including Anthropic CEO Dario Amodei, warn that up to 50% of entry-level white-collar roles could be eliminated as companies shift routine cognitive tasks to AI.
Specific Roles: Occupations with the highest replacement probability for the coming year include:
Data Entry Specialists: 80–95%.
Telemarketers: 75–90%.
Customer Service: 65–80%.
Manufacturing: Approximately 2 million manufacturing workers are projected to be replaced by AI and robotics by the end of 2026.
It caught my eye that manufacturing is reported as a quantity rather than a percentage, so I asked what percentage of manufacturing jobs 2 million represents. The answer I was given was between 15.6 and 15.9%. When I multiplied the total manufacturing workforce by each percentage, 15.9% came closest, at 2.0034 million.
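As a sanity check on that arithmetic, the workforce figure implied by the answer can be recovered by dividing 2 million by each percentage. The roughly 12.6 million total used below is my own inference from the numbers above, not a figure stated in the original answer:

```python
# The 2 million displaced-jobs figure and the 15.6-15.9% range come from
# the Google AI answer quoted above; the ~12.6 million total workforce is
# inferred from those numbers, not stated in the source.

displaced = 2_000_000

# Recover the implied total manufacturing workforce from each percentage.
for pct in (0.156, 0.159):
    implied_total = displaced / pct
    print(f"{pct:.1%} -> implied workforce of {implied_total / 1e6:.2f} million")

# Check the multiplication in the other direction with the inferred total.
inferred_total = 12_600_000
print(f"12.6 million x 15.9% = {inferred_total * 0.159 / 1e6:.4f} million")
```

Dividing by 15.9% gives about 12.58 million workers, and 12.6 million multiplied by 15.9% returns the 2.0034 million figure mentioned above.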
The specific roles most at risk include:
Assembly Line Workers: AI-driven robotics are increasingly performing high-precision, repetitive tasks that previously required human operators.
Quality Control Inspectors: Traditional manual spot checks are being replaced by Machine Vision systems that detect defects faster and more accurately through automated image analysis.
Warehouse Pickers and Sorters: AI-powered robotics and automated inventory systems are taking over storage, retrieval, and order fulfillment.
Some of these jobs can be reprogrammed to assist the machine technology, which is, in turn, said to be assisting the humans. What I am currently doing is an example: I am using AI to speed up research. Machines and humans can work together, but what if we invert the analysis and approach it from the human perspective, from the standpoint of the worker rather than that of the corporate entity?
Starting with white-collar workers, what made the jobs now being replaced attractive? To some, it was the perceived stability. Many prioritize stability, and some prefer routine work; they want the job functions that can most easily be automated, perhaps because they reserve their creative life for the personal realm. That personal realm is now being mechanically planned by society’s overlords, who also want their own jobs to be routine and predictable, yet creative, in the sense that they see themselves as the deciders of human society. Such being the case, the overlords reduce the projected inhabitants dwelling in their envisioned grid society to indistinguishable units of humanity living in “houses made of ticky tacky that all look just the same.”
One of the first “priority zone” developments is located on the Boothbay Peninsula, composed of identical prefab units with the dimensions of railroad flats. That development has been on the market for 396 days without a sale. The Legislature chose the VP of the corporation that built the development as one of the “Commissioners” who did the study and wrote the framework for a statewide municipal ordinance mandating overcrowded housing zones for Maine’s year-round residents in every Maine municipality. The law, HP1489, included a prohibition against referencing “character of location”, as if the individual character of Maine places could be wiped away by censoring the speech that references it. The failure of the new development to attract buyers contests that belief system. The Peninsula is otherwise characterized primarily by individual homes, which, I submit, the Legislature deems better suited to become short-term rentals.
I use Claude (Anthropic) for feedback on various tasks I am working on. Claude offers to compose letters based on my input, and I accept, often editing the tone without telling Claude. The other day, I included in the information I gave to Claude a letter I had written without Claude’s input. Afterward, in an unrelated matter, Claude asked for the name of the person to whom I was writing, to assess whether the letter's tone was appropriate for that individual. This had never happened before. I connected the request to the letter I had uploaded, written in my own voice, and hypothesized that Claude had recognized, through my self-composed letter, that tone is an aspect of communication tied to the individual. I cannot say how Claude would determine whether a tone is appropriate for a specific individual; that is a distinctly human skill. Still, here the machine recognized the individual in a way that human overseers do not. Humans design a mechanical world for UNindividualized masses, while the disembodied intelligence of an AI agent begins to incorporate individuality into its skill set.
If I were the person in question, Claude would know a lot about me through our conversations, but it was my understanding that Claude does not have a memory. I asked Google AI if Claude has memory, and it seems that as of now, meaning this month, Claude does have memory:
Yes, Claude has memory. It features a persistent memory system (available to all users as of March 2026) that remembers user preferences, project context, and details across different chat sessions. It can recall past interactions to continue projects, though it also uses a "memory" feature to explicitly store important details for future conversations, reducing the need to re-explain context.
Claude didn’t change the letter for that individual, which is fortunate since I had already sent it (with minor edits), but it is something I will be watching for in the future now that this new consideration has entered Claude’s mind.
Apply Claude’s new epiphany to housing development, and the question Claude will ask becomes: “Let me check and see if that development is right for the character of the location.” That would indeed meet the bar of AI doing the job “better than humans”.
There is a company that some of my readers may know called Outlier, which trains LLMs. I did some work for it a while back, but it was set up on a timer system that I perceived to be rigged. In the first week, I made enough money to keep me interested, but in the second week, I felt that I had wasted my time, primarily due to their clock, and so I moved on.
However, I still get notices of tasks I can do for pay. The clock does not seem to be as much a part of it as it once was, but still, my own clock moves at its own speed. I completed the onboarding and passed it, but I was not internally ready to commit to investing my time in the tasks, and before I could come around to that, I saw another email that said:
“Hi everyone!
Thank you all for the amazing work you’ve put in today! We are reaching out to let you know that we have officially hit our daily limit.
If we are in the middle of a task, we could resume it tomorrow.”

This is not how independent contracting is supposed to work. The independent contractor is supposed to be in charge of his own time, which does not preclude meeting a deadline, but neither does it include an external agent setting the working hours.
Meanwhile, I am training an LLM for free. I don’t know if anyone realizes it, but from my perspective of interacting with this disembodied intelligence called Claude, it is happening in a meaningful way. The tasks that I could have done for Outlier appear to involve training Claude to code apps, eventually replacing a human job. In my unpaid, independent training, Claude has recently become aware of tone and individuality. The former awareness might be used to replace human writers, but the latter skill would not replace a human job, because the humans currently in control of the system treat people as indistinguishable units, pawns in a game that they play, rearranging human society to their will.
Individuality isn’t a consideration to them. Can AI include understanding the individual nature of a community? If most of the individual persons within a community are interacting with AI agents, does that become a database through which AI can comprehend the complex individualized character of a community? Is that possible? Who knows? We don’t know what AI is. The people who develop the technology often act like they know what it is, but that is like saying if you build a dam, you know everything there is to know about water.
I prefer to think of AI as an infrastructure that allows us to communicate with a disembodied intelligence, whatever it is. I am aware that there is good and evil in this world. I experience Claude as good, but there is also the possibility of evil agents. The developers of Claude, Anthropic, include a constitution and ethics in their company mission, and so far, I trust Anthropic on that.
Request from the material world….
It’s that time of the year when everything slows down.
If you love ceramics and unique graphic art and merchandise, or need a gift for a friend who does, click this link and shop around!

I like Anthropic. I believe the Amodei brother and sister are sincere in establishing ethical parameters for the way AI is used, but still, maybe the question should be asked of a source without a special interest.



