Friday, May 8, 2026

The Future of AI and Accountants: Which Finance Jobs Are Safe and Which Are Gone

Will AI Replace Humans in Finance and Accounting?

Routine bookkeeping faces an 85% automation risk. Complex financial advisory work faces under 25%. Those two numbers tell the story of what is happening to the accounting and finance profession more clearly than any broader generalisation. AI is not replacing accountants — it is splitting the profession into two very different futures. The people processing transactions and preparing standard returns are in a genuinely different position from the people advising clients, interpreting complex regulations, and making strategic judgments. This guide tells you which side of that divide you are on, what the data actually shows about job security, and what to do about it now.

Table of Contents

  1. What AI Is Already Doing in Finance and Accounting
  2. The Finance Jobs That Are Genuinely Going
  3. The Finance Jobs That Are Safe
  4. The Profession Is Splitting in Two
  5. How the Big Four and Major Firms Are Using AI
  6. What AI Cannot Do in Accounting
  7. How to Future-Proof Your Finance Career
  8. The Realistic Timeline
  9. Frequently Asked Questions

What AI Is Already Doing in Finance and Accounting

The shift is already well underway. According to the 2025 Wolters Kluwer Future Ready Accountant report, 77% of firms plan to increase their AI investment and 35% are already using AI tools daily. The profession has passed the experimentation phase and entered the integration phase — which means the question is no longer whether AI will change accounting, but how far along that change already is.

Where AI is already doing the work: Optical character recognition processes invoices automatically, matching them against purchase orders and flagging discrepancies without human intervention. Bank reconciliation that used to occupy a bookkeeper for hours runs in seconds. Payroll calculations, tax return preparation for standard cases, and financial report generation are largely automated in firms that have invested in modern platforms. Tools like QuickBooks AI, Xero, and enterprise ERP systems handle the transaction processing that defined entry-level finance work for decades. The 2025 Intuit survey found that 93% of accountants are already using AI to support client advisory services — not as a future plan, but as current practice.
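To make the invoice-matching step concrete, here is a minimal sketch of the kind of check these platforms perform: compare an incoming invoice against its purchase order and flag anything outside tolerance for human review. The data shapes, field names, and 1% tolerance here are illustrative assumptions, not the logic of any particular product.

```python
from dataclasses import dataclass

@dataclass
class PurchaseOrder:
    po_number: str
    vendor: str
    amount: float

@dataclass
class Invoice:
    invoice_id: str
    po_number: str
    vendor: str
    amount: float

def match_invoice(invoice, purchase_orders, tolerance=0.01):
    """Match an invoice to its purchase order; route mismatches to a human."""
    po = next((p for p in purchase_orders if p.po_number == invoice.po_number), None)
    if po is None:
        return ("exception", "no matching purchase order")
    if po.vendor != invoice.vendor:
        return ("exception", "vendor mismatch")
    # Flag amounts that differ from the PO by more than the tolerance
    if abs(po.amount - invoice.amount) > tolerance * po.amount:
        return ("exception", "amount outside tolerance")
    return ("auto-approve", "matched")
```

Real systems are fuzzier than this (OCR noise, partial deliveries, three-way matching against goods-received notes), but the structure is the same: auto-approve the clean matches and send only the exceptions to a person.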

The adoption is being driven by economics as much as capability. When AI handles transaction processing reliably and quickly, the cost argument for keeping humans on that work is hard to sustain. Firms that have automated routine processing are reinvesting the time savings into higher-margin advisory work — not out of altruism about staff development, but because advisory work is where clients pay more and where relationships are stickiest.

The talent paradox: Despite all the automation anxiety, the accounting job market is tighter than the headlines suggest. The unemployment rate for accountants and auditors was just 2.0% in 2025, well below the national average. The Bureau of Labor Statistics projects 5% employment growth for accountants and auditors through 2032. Robert Half's 2026 research found that 61% of finance and accounting hiring managers say it is harder to find skilled professionals than a year ago. The market is in transition — but that transition is creating scarcity in skilled roles, not surplus.

The Finance Jobs That Are Genuinely Going

Certain finance and accounting roles are facing structural decline, and being honest about which ones matters more than offering false reassurance. The common thread is the same in every case: they are built primarily on volume processing of structured data, which is exactly what AI does faster, cheaper, and with fewer errors than humans.

Accounts Payable and Receivable Clerks

This is the role with the highest automation risk in finance — estimated at 84% by current analyses. Invoice processing, payment matching, and ledger updates have been automated at scale by OCR and AI integration platforms. Large organisations that used to employ teams of AP clerks now run the same volume through software with minimal human oversight. The humans who remain are there for exceptions, disputes, and vendor relationships — a small fraction of the original headcount.

Basic Bookkeepers

Routine bookkeeping — recording transactions, reconciling accounts, producing standard month-end reports — is one of the most automated functions in finance. Cloud accounting platforms with AI categorisation have made it possible for a small business owner to handle their own bookkeeping, or for a single bookkeeper to manage a client load that would previously have required a team. The market for basic bookkeeping services has contracted significantly and will continue to do so.

Payroll Administrators

End-to-end payroll processing — calculating pay, managing deductions, handling benefits enrolment, producing payslips — is now largely automated. Platforms like ADP, Workday, and modern HR systems process payroll with minimal human input for standard situations. The human role has shifted toward managing exceptions, handling employee queries, and ensuring the rules the system follows are correctly configured.

Junior Financial Analysts (Data Processing Functions)

The portion of a junior analyst's job that involves pulling data, building standard reports, and populating dashboards is being automated. AI produces financial summaries, variance analyses, and trend reports from underlying data faster and more consistently than a junior analyst working in spreadsheets. The analytical judgment layer — what does this mean, what should we do about it — remains human. The data processing layer is not.

The timeline matters: These roles are not all disappearing simultaneously. Immediate pressure (2025–2026) applies to data entry, basic bookkeeping, and standard payroll. Junior analyst data processing faces significant compression in the 2027–2029 window. Mid-level analysis is in the 2030–2035 horizon. Knowing where your specific role sits on that timeline is more useful than generic anxiety about automation.

The Finance Jobs That Are Safe

Roles built on professional judgment, client trust, regulatory accountability, and the interpretation of complexity are not just surviving — they are becoming more valuable as the routine work around them is automated away.

Finance roles with strong long-term protection

  • CFOs and senior finance leaders — Strategic financial decision-making, stakeholder management, and accountability for organisational outcomes require human judgment at a level AI cannot replicate.
  • Tax advisors (complex planning) — Optimising across multiple entities and jurisdictions, interpreting evolving legislation, and managing grey areas requires experienced professional judgment that earns premium fees precisely because it cannot be automated.
  • Forensic accountants — Investigating fraud, tracing funds through complex structures, and providing expert witness testimony requires human investigation skills and accountability that AI cannot provide.
  • Financial advisors and wealth managers — The relationship built on years of understanding a client's circumstances, risk tolerance, and life goals is what human advisors provide. Robo-advisors handle low-cost index management. Everything else is the human advisor's domain.
  • Auditors (complex engagements) — Professional judgment in evaluating management estimates, assessing misstatement risk, and exercising scepticism carries legal accountability that AI cannot hold.
  • AI and technology finance specialists — AI governance accounting, digital asset valuation, and technology CFO advisory are growth roles that did not exist five years ago and are in high demand.

Finance roles under the most pressure

  • Accounts payable and receivable clerks — 84% automation risk
  • Basic bookkeepers — core tasks now largely automated
  • Payroll administrators — handled by modern HR platforms
  • Data entry and transaction processing roles
  • Junior analysts focused on report generation and data pulling
  • Standard tax preparation (simple personal and business returns)

The Profession Is Splitting in Two

The most important thing to understand about AI and accounting is not that jobs are being lost — it is that the profession is bifurcating into two very different types of work, with very different futures.

Each role type below is listed with its automation risk, direction of travel, and what the work requires:

  • Transaction processing, data entry, standard reporting: Very High risk (75–85%); declining headcount, reduced pay; requires attention to detail, system knowledge
  • Standard tax preparation (simple returns): High risk (60–75%); consumer software taking market share; requires tax knowledge, software proficiency
  • Junior analyst (data-processing focus): Moderate–High risk (40–60%); role being redesigned around AI tools; requires analytical judgment, tool fluency
  • Management accounting and FP&A: Moderate risk (25–40%); augmented by AI, not replaced; requires business judgment, communication
  • Complex tax planning and advisory: Low risk (15–25%); growing demand, premium fees; requires expertise, client relationships
  • Forensic accounting and investigation: Very Low risk (<15%); stable, AI as tool not replacement; requires investigative judgment, legal knowledge
  • CFO and senior strategic finance: Very Low risk (<10%); growing complexity and importance; requires leadership, strategy, accountability

The split in plain language: If your finance career is primarily about processing information accurately, AI will do it better. If it is primarily about interpreting information wisely, building relationships, exercising accountable judgment, and advising people through complex decisions, AI makes you more productive but cannot replace you. The profession is sorting into these two categories faster than most people's career plans have adjusted for.

How the Big Four and Major Firms Are Using AI

PwC has invested over a billion dollars in AI capabilities. KPMG's AI-powered audit platform now analyses entire transaction populations rather than samples — a fundamental change from traditional audit methodology that improves coverage while reducing manual testing time. EY has deployed AI for document analysis and contract review. Deloitte uses AI across financial modelling, due diligence support, and regulatory analysis.

What the Big Four are doing with the time saved: The consistent pattern is reinvestment rather than headcount reduction. When AI handles processing, experienced professionals spend more time on client work — which is higher-margin and stickier. Audit sampling gives way to full-population testing. Tax compliance gives way to proactive planning conversations. The firms are not smaller — they are doing different work with the same people. None of the Big Four has reduced its professional headcount as a result of AI adoption.

Mid-size and smaller accounting firms are following a similar trajectory with one important difference: AI is enabling them to compete for work that previously required Big Four scale. A two-partner firm with strong AI tools can now deliver depth of analysis that would previously have required a much larger team. This is democratising the market — and eroding the headcount-based competitive advantage that larger firms have historically relied on.

What AI Cannot Do in Accounting

  1. Exercise professional accountability — A CPA can sign an audit opinion, represent clients before the IRS, and take personal professional responsibility for their work. These legal authorities require a licensed professional. AI can analyse the data behind an audit but only a human can sign the opinion and bear the consequences if it is wrong.
  2. Interpret regulatory ambiguity — Tax law and accounting standards are full of grey areas. When a rule is unclear or novel business arrangements do not fit existing categories, the question is how the rule applies — and that requires trained professional judgment, not pattern matching on historical data. This is where the most valuable accounting work has always lived.
  3. Build the client relationship over time — A CFO or senior tax partner who has advised a client through multiple business cycles, knows the ownership dynamics, and understands the subtle risk tolerances of the management team is doing something software cannot replicate. That accumulated trust is the foundation of long-term client relationships.
  4. Navigate genuinely novel situations — When a client faces an unprecedented transaction structure, a new tax authority position, or a novel regulatory interpretation, the accountant reasons from first principles in uncharted territory. AI models trained on historical data are least reliable exactly where experienced professionals are most valuable.
  5. Have the difficult conversations — Telling a client their planned transaction will not achieve the intended tax outcome, or that their financial statements require a qualified audit opinion, or that their business model has a structural problem — these conversations require the interpersonal skill and trusted relationship that only human advisors build.

How to Future-Proof Your Finance Career

The single most important shift: The accountants thriving in 2026 have moved from being data processors to being data interpreters. AI handles the processing. Human value is in the judgment that turns processed data into useful advice. Every career decision should be evaluated against this shift — does this move me toward interpretation and judgment, or does it keep me in processing?

  1. Master the AI tools in your specific area — Being fluent in the AI tools relevant to your work makes you more productive and more valuable. An accountant who can use AI to deliver deeper analysis faster is more competitive than one who avoids the tools. Know QuickBooks AI, Xero, your firm's analytics platform, and whatever tools are standard in your practice area.
  2. Shift deliberately toward advisory work — If your current role is heavily weighted toward processing and reporting, seek the advisory components. Volunteer for client meetings, take on work that requires you to form and communicate a view. The profession is rewarding advisory work with higher salaries and more job security than processing work.
  3. Develop specialisms in new complexity — AI regulation and governance accounting, cryptocurrency and digital asset treatment, ESG reporting standards, international transfer pricing, and R&D tax credits are areas where the rules are complex, rapidly evolving, and open to significant professional interpretation. Early specialists in emerging areas have always commanded premium positions.
  4. Protect and leverage your credentials — A CPA or equivalent carries legal authority that AI cannot hold. The credential matters more, not less, as AI automates routine work — because what distinguishes a credentialled professional from software is precisely the accountability, regulatory authority, and professional judgment the credential represents.
  5. Build client relationships intentionally — The relationship between a trusted financial advisor and their client is the most durable source of career security in the profession. Clients who trust you as a person, not just as a service provider, will not replace you with software.

For broader context on how AI is reshaping professional roles across industries, see our guides on what jobs AI will replace, why AI hasn't taken your job yet, and AI job losses in HR, a profession facing a very similar split between routine and strategic work.

The Realistic Timeline

  1. Now — 2027 (Already happening): Data entry, basic bookkeeping, AP processing, and standard payroll are substantially automated in modern organisations. The market for these roles has contracted and will not recover. Standard personal tax returns are being handled by consumer software at scale. Junior analyst data-pulling and report generation are heavily AI-augmented.
  2. 2027–2030 (Accelerating): Compliance monitoring, credit processing, and junior analyst roles face significant redesign. Mid-size firms that have not invested in AI begin losing clients to those that have. The bifurcation between processing-focused and advisory-focused roles becomes impossible to ignore in compensation data.
  3. 2030 and beyond (Settled picture): The profession is structurally smaller in processing headcount and larger in advisory and specialist headcount. AI handles the vast majority of structured data processing. Human professionals focus almost entirely on judgment, relationship, and accountability functions — and are paid accordingly.

Frequently Asked Questions

Will AI replace accountants?

Not as a profession. The BLS projects 5% employment growth for accountants and auditors through 2032, and the profession's unemployment rate was just 2% in 2025. What AI is replacing is the routine, high-volume processing work that characterised entry-level accounting. Judgment-intensive advisory, complex tax planning, audit, and client-facing work is as in demand as ever — and in some cases becoming more valuable as the routine work around it is automated.

Which accounting roles are most at risk from AI?

Accounts payable and receivable clerks face around 84% automation risk — invoice processing, payment matching, and ledger updates are now handled automatically by modern platforms. Basic bookkeepers, payroll administrators, and junior analysts in data-processing roles face significant structural pressure. Standard tax preparation for simple personal and business returns is also being automated by consumer platforms.

Is accounting still a good career choice in 2026?

Yes — with an important qualification. Accounting built around advisory work, complex judgment, and professional accountability has strong growth prospects. Accounting built around processing transactions and producing routine reports faces structural headwinds. The career path toward advisory and specialist work is the one with a strong future, and the CPA credential and professional relationships remain genuinely valuable assets on that path.

How are the Big Four using AI?

PwC has invested over a billion dollars in AI capabilities. KPMG's AI audit platform now analyses entire transaction populations rather than samples. EY and Deloitte have deployed AI across document analysis, financial modelling, and regulatory research. The consistent pattern is reinvestment of time saved into higher-margin advisory and complex technical work — not headcount reduction. None of the Big Four has reduced its professional headcount as a result of AI adoption.

Can AI prepare tax returns?

Yes for standard situations. Consumer platforms like TurboTax effectively handle straightforward personal returns, and business accounting software handles routine business filings with minimal human input. Complex tax planning — optimising across multiple entities and jurisdictions, managing regulatory ambiguity, advising on novel transactions — requires experienced professional judgment. The market for human tax professionals is shifting from preparation toward planning.

What skills should accountants develop to stay relevant?

AI tool fluency in their specific practice area, advisory and communication skills, specialism in areas of new or complex regulatory change (AI governance accounting, digital asset treatment, ESG reporting), relationship-building capabilities, and ongoing professional development to maintain credentials carrying legal authority. The accountants thriving in 2026 have moved from being data processors to being data interpreters — and that is the direction every finance career should be heading.

Is the CPA qualification still worth getting?

Yes — more than ever in some respects. CPAs can sign audit opinions, represent clients before the IRS, and take professional responsibility for their work — legal authorities AI cannot hold. As AI handles more routine accounting work, the credential increasingly marks out the professionals providing the judgment, accountability, and advisory value that software cannot. It is a baseline qualification for serious accounting careers, not a guarantee of advancement on its own.

How much of an accountant's job can AI automate?

McKinsey estimates 22% of a typical accountant's job can be automated with current AI, with 44% technically automatable. The most automatable tasks — data entry, reconciliation, standard report generation — are already substantially automated in organisations with modern platforms. The less automatable tasks — client advisory, complex judgment, regulatory interpretation, professional accountability — are where the profession is concentrating its value and its headcount growth.

The Future of AI and Lawyers: Is Robo-Litigation Here?

Will AI Render Lawyers Obsolete? What About the Legal Profession?

AI is already doing legal work that partners billed at $500 an hour five years ago. Document review that used to keep junior associates occupied for three days now takes twenty minutes. Legal research that required a trained researcher to dig through databases for hours is handled in seconds. And yet attorney headcount at the top 100 US law firms grew by nearly 8% in 2024 — the opposite of what you would expect if AI were eliminating legal jobs. The story of AI and lawyers is more interesting, and more nuanced, than either the fear or the hype suggests. This guide explains what is actually happening, who should be concerned, and what smart lawyers and law students should do about it.

Table of Contents

  1. What AI Is Actually Doing in Law Firms Right Now
  2. The Roles That Are Genuinely Under Pressure
  3. The Roles That Are Safest
  4. What AI Cannot Do in Law
  5. The Ethics and Liability Questions Every Lawyer Needs to Understand
  6. How Law Firms Are Using AI Right Now
  7. What Law Students and Junior Lawyers Should Do
  8. The Realistic Timeline to 2030
  9. Frequently Asked Questions

What AI Is Actually Doing in Law Firms Right Now

The shift in legal AI adoption has been remarkable even by the standards of a technology landscape defined by rapid change. In 2024, around one in four legal professionals was using AI tools for work. By 2026, that figure had risen to nearly seven in ten — an almost threefold increase in two years, a pace the legal technology industry described as unprecedented for a profession that historically embraced new tools with the enthusiasm of a cat approaching a bathtub.

Lawyers are not adopting AI because it is fashionable. They are adopting it because it saves time and, in a profession where time is billed by the hour, that translates directly into money. A lawyer who saves 240 hours a year through AI assistance can take on 15 to 20 percent more client work without working longer hours. That is a compelling proposition regardless of how you feel about the technology.
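The arithmetic behind that claim is easy to check. Assuming a base of roughly 1,200 to 1,600 billable hours a year (an illustrative assumption, not a figure from any cited report):

```python
# Hours freed by AI assistance, relative to two assumed annual billable-hour bases
hours_saved = 240
for base_hours in (1200, 1600):
    uplift = hours_saved / base_hours
    print(f"{base_hours} billable hours/year: {uplift:.0%} extra capacity")
```

240 saved hours is 20% of a 1,200-hour year and 15% of a 1,600-hour year, which is where the 15 to 20 percent range comes from.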

Contract review and due diligence

This is where AI has made the most visible impact on legal work. Tools like Harvey AI, Kira Systems, and Luminance can review hundreds of contracts simultaneously, flagging unusual clauses, identifying missing provisions, and summarising key terms at a pace no human team could match. In a major M&A transaction where due diligence might involve reviewing thousands of documents across multiple data rooms, AI has compressed what used to be weeks of associate time into days. The work still requires a lawyer to review the output and apply professional judgment — but the volume of raw review work has collapsed.

Legal research

Westlaw Precision, LexisNexis Protégé, and Harvey AI have transformed legal research. A question that would have taken a junior associate several hours of database searching can now be answered in minutes. Thomson Reuters has built agentic AI workflows into its platforms that can execute multi-step research tasks autonomously. The quality still needs human verification — more on that shortly — but the time required has been cut dramatically.

Document drafting

AI drafts standard legal documents — non-disclosure agreements, employment contracts, demand letters, routine court filings — competently and quickly. For documents that a lawyer has drafted hundreds of times before, AI produces a solid first draft in seconds that the lawyer then refines. This is not replacing legal drafting skill; it is eliminating the blank-page problem for documents where the structure and language are largely standard.

The access to justice angle

One consequence of AI lowering the cost of basic legal tasks is that legal help is becoming accessible to people who previously could not afford it. Simple wills, standard lease agreements, basic employment contracts, and routine immigration paperwork are now within reach for individuals and small businesses that faced significant cost barriers before. This is one of the genuinely positive developments in legal AI — the profession has long had an access problem, and AI is beginning to address it.

The Roles That Are Genuinely Under Pressure

Honesty requires acknowledging where the pressure is real, even in a profession where overall employment is growing. The Bureau of Labor Statistics projects continued growth in legal employment overall — but that aggregate picture masks significant variation at the role level.

Junior associates doing document review

First and second-year associates at large law firms have historically spent a significant portion of their time on document review in litigation matters. That work is now largely handled by AI. The implications for how large law firms recruit, train, and develop junior lawyers are significant. The traditional path of learning through high-volume routine work is being disrupted, and firms are still working out what replaces it.

Paralegals and legal researchers

Roles whose primary function is conducting research, summarising documents, or managing straightforward transactional paperwork face genuine pressure. McKinsey estimates that 22% of a lawyer's job can be automated with currently available AI, and 44% of legal tasks are technically automatable. For support roles where that 44% represents the core of the job rather than a minority of it, the structural pressure is real.

The billable hour model under pressure

Even for lawyers whose jobs are not directly at risk, AI is creating pressure on the billing model itself. When a task that used to take ten hours takes one, clients increasingly ask why they should be charged for ten. The Wolters Kluwer 2026 Future Ready Lawyer Report describes an emerging "80/20 reversal" — a shift from lawyers spending 80% of their time on routine work to spending 80% on high-value strategic advice. That reversal is coming whether firms plan for it or not.

The hallucination problem in legal AI: Stanford research found error rates of 17% for Lexis+ AI and 34% for Westlaw's AI-assisted research tools. Courts have documented over 700 cases worldwide involving AI hallucinations in legal filings, with sanctions ranging from warnings to significant monetary penalties. The rate reached four or five new documented cases per day by late 2025. This is not a theoretical risk — it is a documented professional liability hazard that every lawyer using AI tools needs to take seriously.

The Roles That Are Safest

Most resilient legal roles

  • Trial lawyers and litigators — Courtroom advocacy requires reading a room, adjusting in real time, building credibility with a jury, and exercising contextual judgment that AI cannot replicate. Complex litigation is growing, not shrinking.
  • Criminal defence lawyers — Representing a person facing criminal consequences requires a human relationship of trust that is irreducibly personal.
  • Family lawyers — Divorce, custody, and family matters are among the most emotionally complex legal situations people face. The interpersonal skill required is not automatable.
  • Senior deal lawyers and negotiators — Reading rooms, building relationships, and applying judgment built over decades to complex transactions is something AI assists but cannot replace.
  • Regulatory and compliance specialists — AI regulation, data privacy law, and ESG compliance are creating entirely new practice areas that require human judgment to navigate. These are growth areas.

Roles facing the most change

  • Junior associates doing routine document review and research
  • Paralegals focused on document processing and standard research
  • Legal transcriptionists (largely automated)
  • Routine conveyancing and standard transaction work
  • Basic contract drafting and review for standard document types

What AI Cannot Do in Law

AI cannot exercise judgment in genuinely ambiguous situations. Law is full of them. The question is not just what the rule says but how it applies to a specific set of facts that no rule was designed to address, in a jurisdiction with a particular judicial culture, for a client with particular risk tolerance and commercial objectives. This kind of judgment — combining legal knowledge, contextual understanding, and wisdom built from experience — is precisely what makes a senior lawyer valuable, and it is precisely what AI cannot replicate.

AI cannot build the kind of client trust that sustains a legal relationship over time. A client facing a significant legal problem is not just looking for correct information. They are looking for someone they trust to guide them through something difficult. That trust is built through human interaction, consistent judgment, and demonstrated care for the client's interests. As Harvard Law's Center on the Legal Profession notes, demand for lawyers is growing precisely because the world is becoming more legally complex — and that complexity requires human navigation, not just information retrieval.

AI cannot take professional responsibility. A lawyer is personally liable for their work product and owes duties to clients and courts that cannot be delegated to a machine. When an AI system produces a hallucinated case citation in a court filing, it is the lawyer who faces sanctions. This professional accountability structure is one of the most important reasons AI will continue to be a tool for lawyers rather than a replacement for them.

The Ethics and Liability Questions Every Lawyer Needs to Understand

The American Bar Association's Formal Opinion 512, issued in July 2024, established the baseline ethical framework for AI use in legal practice. It requires lawyers to have "reasonable understanding" of the AI tools they use — their capabilities, limitations, and the ways they can fail. This is a professional responsibility obligation, not optional guidance.

What this means in practice: a lawyer cannot rely on AI output without applying independent professional judgment to verify it. Submitting an AI-generated brief containing fabricated citations — which has happened in documented cases resulting in sanctions — is a professional misconduct issue regardless of whether the lawyer knew the citations were fabricated. The duty of competence requires knowing your tools well enough to identify when they have failed you.
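Parts of that verification can be mechanised even while the judgment stays human. As a toy illustration — the citation pattern and the verified list here are both assumptions, not a real legal database or any vendor's tool — a draft could be screened so that any citation not found in a verified source is flagged for the lawyer to confirm by hand:

```python
import re

# Simplified pattern for reporter-style citations such as "410 U.S. 113".
# Real citation formats are far more varied; this is illustrative only.
CITATION_RE = re.compile(r"\b\d{1,4}\s+[A-Z][\w.]*(?:\s[\w.]+)?\s+\d{1,4}\b")

def flag_unverified_citations(draft_text, verified_citations):
    """Return citations found in the draft that are absent from the verified set."""
    found = set(CITATION_RE.findall(draft_text))
    return sorted(found - set(verified_citations))
```

A screen like this proves nothing on its own; it only narrows the list of citations the lawyer must personally confirm, keeping final responsibility where the professional rules place it: with the lawyer.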

The disclosure question: Dozens of federal and state judges have issued standing orders requiring disclosure when AI is used in preparing court filings. As of early 2026, 741 AI-related bills had been introduced across 30 US states — an unprecedented level of legislative activity creating a complex and rapidly evolving compliance landscape. Keeping up with these developments is itself becoming a specialist legal practice area, with clients needing lawyers who understand the rules before the rules are fully written.

How Law Firms Are Using AI Right Now

Large international firms — Allen & Overy, Clifford Chance, Linklaters, Latham & Watkins — have invested heavily in proprietary AI tools and partnerships with legal AI companies. Allen & Overy's partnership with Harvey AI is one of the most cited examples: the firm has integrated AI into contract analysis and research workflows across multiple practice groups and jurisdictions. These firms are using AI to maintain competitive advantage and manage client cost pressure — not to reduce headcount, at least not yet. Harvard Law's research found that none of the Am Law 100 firms it surveyed anticipated reducing practising attorney headcount despite reporting productivity gains of up to 100 times on specific tasks.

Mid-size and smaller firms are where the disruption may ultimately be most significant. AI is enabling smaller practices to access research, drafting, and analysis tools that previously required large associate teams. A two-person firm with good AI tools can now compete for work that previously required a team of ten. This is genuinely democratising the legal market.

Corporate legal departments are adopting AI faster than their outside counsel. The ACC/Everlaw survey found that 64% of in-house legal teams now expect to rely less on outside counsel directly because of AI capabilities they are building internally. Law firms that cannot demonstrate AI capability and transparency risk losing work to competitors who can.

What Law Students and Junior Lawyers Should Do

  1. Learn the tools, seriously — At least eight US law schools have now integrated mandatory AI education into their core programmes. Harvard Law School's "AI and the Law" programme provides hands-on learning with current tools. If your school does not offer this yet, seek it out independently. The observation that has become standard in legal career advice is accurate: AI will not make lawyers obsolete, but lawyers who do not use AI will be made obsolete by those who do.
  2. Do not build your career on high-volume routine work — The training model built around years of document review is being disrupted. Junior lawyers need to actively seek higher-complexity work earlier — client-facing matters, complex analytical questions, and anything requiring genuine judgment rather than mechanical processing.
  3. Build client relationships from day one — The client relationship is the most durable source of value in legal practice and the one thing AI cannot replicate. Lawyers who become the trusted adviser rather than the competent technician are the ones whose careers will be most resilient.
  4. Develop specialisms in new legal complexity — AI regulation, data privacy, algorithmic accountability, and ESG compliance are creating entirely new practice areas. These are growth areas precisely because they involve novel, rapidly evolving complexity that requires human expertise. Being an early specialist in an emerging area of law has always been one of the best career strategies.
  5. Sharpen the human skills — Empathy, communication, advocacy, and the ability to navigate difficult human situations are not soft skills in legal practice. They are the core of what a lawyer provides that AI cannot. These are worth investing in deliberately, not treating as secondary to technical legal knowledge.

The Realistic Timeline to 2030

The legal profession does not change quickly. It is conservative by nature, heavily regulated, and built around professional relationships that take years to establish. That is both a reason why AI adoption has been slower than in some other industries and a reason why the changes that are coming will take longer to fully play out.

In the near term, AI tools will become standard infrastructure in most law firms — the way email and document management systems did before them. The ABA's shift from debating whether to use AI to establishing how to use it responsibly reflects a profession that has largely accepted the technology and is now focused on governance. Firms and practitioners treating AI literacy as a competitive advantage today will have built meaningful leads by the time it becomes table stakes.

In the medium term, the billing model will evolve more significantly than the profession is currently acknowledging publicly. When AI compresses the time required for tasks that were previously billed by the hour, value-based pricing will become a practical necessity for many types of work. This will restructure firm economics even as overall demand for legal services continues to grow.

By 2030, the legal profession will look recognisably different in its use of technology and somewhat different in its economics — but it will still be a profession where humans are indispensable, because the work that matters most in law has always been about judgment, relationships, and accountability. None of those are going anywhere.

For broader context on how AI is reshaping professional careers across industries, see our guides on what jobs AI will replace, why AI hasn't taken your job yet, and our earlier overview of how AI is transforming the legal profession.

Frequently Asked Questions

Will AI replace lawyers?

Not as a profession — the employment data is clear on this. Attorney headcount at top US law firms grew nearly 8% in 2024. Law school graduate employment hit a record high. Harvard Law's research found that none of the Am Law 100 firms it surveyed planned to reduce practising attorney headcount despite significant AI productivity gains. What AI replaces is specific routine tasks within legal roles — document review, standard research, mechanical drafting. The legal work requiring genuine judgment, client trust, and professional accountability is as human as ever.

Is it ethical for lawyers to use AI?

Yes — and ABA Formal Opinion 512 has established the framework for doing so responsibly. Lawyers must have reasonable understanding of AI tools and must independently verify AI output before relying on it. Using AI to assist legal work is permitted and increasingly expected. The failure is not in using AI — it is in relying on unverified AI output or submitting AI-generated errors to courts or clients without checking them. The duty of competence applies to AI tools just as it does to any other tool.

Which legal specialties are safest from AI?

Trial and courtroom advocacy, criminal defence, family law, complex deal negotiations, and emerging regulatory areas including AI law, data privacy, and ESG compliance are most resilient. These require human judgment, emotional intelligence, and professional accountability that AI cannot replicate. The specialties under most structural pressure are those built primarily on high-volume repetitive document work — document review, standard research, routine drafting — where AI performs the core tasks reliably and quickly.

What AI tools are lawyers actually using?

The most widely deployed legal-specific AI tools in 2026 are Harvey AI (contract analysis, research, drafting — used by Allen & Overy and major firms), Westlaw Precision and LexisNexis Protégé (AI-enhanced research), Kira Systems and Luminance (contract review and due diligence), and Thomson Reuters' CoCounsel (agentic document review and research workflows). General-purpose tools like ChatGPT and Claude are also widely used, though legal-specific tools trained on legal data are generally more appropriate for formal legal work.

What happens when AI gets a legal citation wrong?

The lawyer who submitted the filing faces the consequences — not the AI vendor. Courts have issued sanctions in documented cases, from formal warnings to significant monetary penalties. Stanford research found error rates of 17% and 34% for major legal AI research tools, meaning AI-generated research always requires independent verification. The duty of competence requires that lawyers understand their tools well enough to identify when they have produced incorrect output — which in legal research means checking that cited cases exist, say what they are claimed to say, and have not been overturned.

Should I still go to law school given AI?

Yes — the employment and salary data strongly supports this. Graduate employment is at a record high. Demand for legal services is growing partly because AI is creating new legal complexity. The strategic point is to approach legal education with AI in mind: develop AI literacy, focus on judgment-intensive practice, and seek emerging specialisms in AI regulation, data privacy, and technology compliance. Lawyers who plan around routine high-volume work face uncertainty. Those who plan around judgment, advocacy, and client relationships have strong prospects.

Is AI creating new legal jobs?

Yes, significantly. AI regulation, data privacy law, algorithmic accountability, and technology compliance are creating entirely new practice areas growing rapidly. The increasing use of AI in consequential decisions — hiring, lending, healthcare — is generating litigation and regulatory work that did not exist before. Legal technology consulting and AI governance are areas of growing demand. The legal profession has consistently created new specialisms as the economy changes, and AI is no exception.

How is AI changing the cost of legal services?

AI is putting downward pressure on the cost of routine legal tasks and making basic legal help accessible to more people and businesses. For standard documents, straightforward research, and routine transactions, AI has significantly compressed time and cost. For complex, judgment-intensive work — major litigation, significant transactions, novel regulatory questions — cost pressure is less acute because clients pay for expertise and accountability, not just time spent. The legal market is bifurcating: cheaper for routine work, still premium for work requiring senior human judgment.

The Future of Drones and AI: Delivery, Warfare, Agriculture, and the Industry Reshaping the World

The Future of Drones and AI: What Is Actually Happening and Where It Is All Going

A drone delivered your neighbour's parcel last week. Another drone spotted a crop disease before it spread across an entire field. And somewhere on a battlefield, an autonomous flying weapon made a targeting decision faster than any human could. Drones powered by AI are no longer a technology of the future — they are embedded in daily life, agriculture, infrastructure, and warfare right now. This guide explains what is actually happening across each of these areas, what the genuine benefits are, and what the risks are that most coverage glosses over.

Table of Contents

  1. Where Drones and AI Actually Are in 2026
  2. How AI Changes What a Drone Can Do
  3. Drone Delivery: What Is Real and What Is Still Coming
  4. Military Drones and the Uncomfortable Questions
  5. How Drones Are Quietly Transforming Farming
  6. Drones in Everyday Life: Inspection, Safety, and More
  7. The Risks That Deserve More Attention
  8. What the Next Decade Looks Like
  9. Frequently Asked Questions

Where Drones and AI Actually Are in 2026

The easiest way to understand where drone technology stands today is to separate what already works from what is still being figured out. Both categories are larger than most people realise.

What already works: drone delivery in specific cities and suburban areas, autonomous agricultural spraying across large commercial farms, infrastructure inspection of pipelines and power lines, military surveillance and precision strike in active conflict zones, and emergency supply delivery to hard-to-reach areas. These are not pilots or proofs of concept — they are operational systems doing real work every day.

What is still being worked out: drone delivery at full national scale (the regulatory framework is the bottleneck, not the technology), reliable autonomous operation in dense urban environments with unpredictable airspace, the ethical and legal frameworks for autonomous weapons, and managing the privacy implications of pervasive aerial surveillance at scale.

The scale of the shift: The drone industry as a whole is expected to roughly double in value over the next seven years. The AI-specific layer — the intelligence that makes drones genuinely autonomous — is growing even faster, at more than three times the pace of the broader market. The military segment remains the largest, but commercial and agricultural segments are growing fastest. Every major industry that operates at scale outdoors is now actively deploying or evaluating AI drone systems.

How AI Changes What a Drone Can Do

The difference between a drone without AI and a drone with it is not a matter of degree. It is a fundamental change in what the machine is capable of.

A traditional drone does exactly what a human operator tells it to do. It flies in the direction you point it, hovers when you tell it to hover, and lands when you land it. Without a human hand on the controls, it does nothing useful. A drone with AI can take off, navigate to a destination it has never visited before, avoid unexpected obstacles, complete a task, and return home — all without anyone touching a remote control.

The capabilities that make this possible have all matured rapidly in recent years. Computer vision lets drones see and understand their environment in real time — identifying what they are looking at, whether that is a structural crack in a bridge, a diseased section of crops, or a moving vehicle on a highway. Autonomous navigation lets drones plan routes dynamically, adapting when something unexpected appears in their path. And swarm intelligence lets multiple drones coordinate with each other, splitting up tasks and adjusting collectively when conditions change — the way a colony of ants organises itself without any single ant directing the whole operation.
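The "splitting up tasks" idea can be illustrated with a toy allocation loop. This is a deliberately simplified sketch — the positions, waypoints, and greedy nearest-drone rule below are invented for illustration, and real swarm systems use far more sophisticated, decentralised algorithms:

```python
import math

def assign_waypoints(drones, waypoints):
    """Greedily assign each waypoint to the nearest available drone.

    drones: dict of name -> (x, y) position; waypoints: list of (x, y).
    Returns dict of name -> list of assigned waypoints.
    """
    plan = {name: [] for name in drones}
    for wp in waypoints:
        nearest = min(drones, key=lambda n: math.dist(drones[n], wp))
        plan[nearest].append(wp)
    return plan

fleet = {"d1": (0, 0), "d2": (10, 0)}
targets = [(1, 1), (9, 1), (2, 3)]

print(assign_waypoints(fleet, targets))
# d1 takes the two points near the origin; d2 takes (9, 1)

# The swarm-like property: if d2 drops out mid-mission, rerunning
# the same rule over the surviving fleet redistributes its work.
print(assign_waypoints({"d1": (0, 0)}, targets))
```

Even this toy version shows why coordination logic, not flight hardware, is where the intelligence lives: the plan adapts automatically when the fleet changes, with no central operator reassigning anything by hand.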

What "edge AI" means for drones: One of the most important recent developments is the ability to run AI processing on the drone itself rather than relying on a connection to a remote server. This matters because drones often operate in environments with poor connectivity — inside buildings, underground, in conflict zones where communications are jammed. A drone that can think for itself, on board, without needing a signal, is a fundamentally more capable and robust tool.

Drone Delivery: What Is Real and What Is Still Coming

Drone delivery is the application most people have heard about, and it generates more hype and more scepticism than almost any other use case. Both reactions are partly justified.

What is genuinely real: Amazon, Wing (Alphabet's drone subsidiary), Zipline, and Walmart are all operating commercial drone delivery services in specific US cities and internationally right now. Wing has completed hundreds of thousands of deliveries. Zipline — which started by delivering blood supplies to remote hospitals in Rwanda — now delivers consumer orders in suburban US neighbourhoods in under ten minutes. These are not tests. They are services you can actually use.

What the sceptics are right about: drone delivery is still a niche service, not a mass-market one. Most drones can only carry a few kilograms, which rules out the majority of things people order online. They work well in suburban areas with gardens or driveways but struggle in dense urban environments where landing safely is genuinely hard. And in most countries, flying a drone beyond the operator's line of sight still requires special regulatory approval — which means the seamless city-wide drone delivery network of popular imagination is still waiting on governments to act, not on engineers.

The real bottleneck: Drone delivery technology has been ready for broader deployment for several years. The thing holding it back is not battery life or navigation software — it is the regulatory framework for flying unmanned aircraft at scale in shared airspace. When regulators establish clear nationwide rules for beyond-visual-line-of-sight operations, drone delivery will expand very quickly. The technology is waiting for the paperwork.

What this means for delivery jobs

Drone delivery will create genuine job pressure in one specific category: light parcel, short-distance delivery in suburban areas. For heavier items, longer distances, and urban environments with complex access requirements, ground delivery will remain dominant for the foreseeable future. The picture is not as simple as "drones replace delivery workers" — it is more like "drones take the lightest, shortest, most repetitive runs while humans handle everything else." For the bigger picture on logistics automation, see our guide on the future of self-driving trucks.

Military Drones and the Uncomfortable Questions

No honest account of AI drones can avoid this topic. The way armed drones with AI are being used in active conflicts is changing warfare in ways that are outpacing the international laws and ethical frameworks designed to govern it.

What Ukraine changed

The conflict in Ukraine has been a real-world test of what cheap, mass-produced autonomous drones can do on a modern battlefield. Ukraine manufactured and deployed millions of small FPV attack drones — fast, cheap to produce, and increasingly capable of operating with minimal human guidance. The cost arithmetic of these weapons is radically different from conventional precision munitions, and every military in the world has noticed. You can produce hundreds of AI-guided drones for the cost of a single traditional guided missile.

The implications go beyond Ukraine. When effective attack drones cost a few hundred dollars to manufacture, the barrier to drone warfare is no longer money or industrial capacity. Any sufficiently motivated actor — state or non-state — can field meaningful drone capabilities. This changes the security calculations for every country and raises serious questions about how existing weapons treaties and laws of war apply to a class of weapon that did not exist when those frameworks were written.
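The cost arithmetic above can be made concrete with a back-of-the-envelope calculation. The unit costs here are purely illustrative assumptions — the article gives only orders of magnitude, not sourced figures:

```python
# Hypothetical unit costs, for illustration only.
fpv_drone_cost = 500            # "a few hundred dollars" per attack drone
guided_missile_cost = 150_000   # an assumed conventional precision munition

# How many drones one missile's budget buys:
drones_per_missile = guided_missile_cost // fpv_drone_cost
print(drones_per_missile)  # 300

# Even if only 1 drone in 10 reaches its target, the cost per
# effective strike still undercuts the missile by a wide margin:
cost_per_effective_strike = fpv_drone_cost * 10
print(cost_per_effective_strike)  # 5000
```

The exact numbers matter less than the ratio: when the cheap option is two orders of magnitude cheaper per unit, even very high attrition rates leave it economically dominant.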

The human-in-the-loop question: Current military doctrine in the US and NATO requires a human being to make the decision to use lethal force — even if a drone identifies and tracks a target autonomously, a person must authorise the strike. But drone swarm operations happen at speeds where maintaining meaningful human oversight of each individual action is becoming practically impossible. The pressure toward systems that act faster than human decision-making is real, and the international legal and ethical frameworks to govern that are not keeping pace. This is one of the most consequential unresolved questions in contemporary security policy.

The programmes to watch

The US military's Replicator initiative is explicitly designed to field large numbers of cheap, capable autonomous drones faster than adversaries can counter them. Shield AI has developed software that lets drones navigate in GPS-denied environments without any communication link to a human operator — and in late 2025 unveiled a drone designed to fly alongside crewed fighter jets under AI direction. China has integrated advanced AI for coordinated autonomous drone swarm operations at military scale. The competition between these programmes is one of the defining technology races of this decade.

How Drones Are Quietly Transforming Farming

Agriculture is where AI drones are probably making the most quietly significant impact — and it gets far less attention than delivery or military applications because farming does not trend on social media.

The core application is simple to describe but meaningful in practice. Drones equipped with specialised cameras can detect differences in how plants reflect light that are invisible to the human eye. These differences reveal which plants are stressed, diseased, under-watered, or pest-damaged — sometimes days before any visible symptoms appear. A farmer who used to walk fields looking for problems, or who sprayed entire fields as a precaution, can now get a precise map showing exactly where the problems are and treat only those areas.

The environmental implications are significant. When you only spray the 10% of your field that actually has a problem, you use 90% less chemical on that intervention. Over a full growing season across a large farm, the reduction in pesticide and fertiliser use is substantial — both for farm economics and for the surrounding environment.

Beyond crop monitoring, agricultural drones handle precision spraying at speeds no human could match, survey large properties for soil condition mapping, track livestock across extensive grazing areas, and provide the kind of timely data that makes the difference between catching a disease outbreak early and losing a significant portion of a harvest. This is the fastest-growing civilian application for AI drones, because it clearly works and clearly pays for itself.

Drones in Everyday Life: Inspection, Safety, and More

Keeping infrastructure safe

Inspecting a wind turbine blade, a long stretch of high-voltage power line, or the underside of a motorway bridge used to require either expensive specialist equipment, rope access workers in hazardous positions, or simply not doing it as often as you should. AI drones have changed this entirely. A drone with a high-resolution camera and thermal imaging can inspect kilometres of pipeline or hundreds of turbine blades in a day, flagging anomalies precisely enough that engineers can prioritise which ones actually need physical attention.

"Drone-in-a-box" systems — where a drone lives in a weatherproof housing at an inspection site, launches automatically on a schedule, completes its survey, and returns to recharge — are now operational at major industrial sites. The drone effectively becomes a piece of fixed infrastructure that happens to fly.

Emergency response

In search and rescue, the first few hours are critical and covering large areas quickly is the difference between finding a missing person in time and not. AI drones with thermal cameras can sweep large areas of terrain much faster than ground teams, detect heat signatures indicating a person, and relay the location in real time. In disaster zones, they provide aerial assessment before it is safe to send in ground teams, identify survivors in collapsed buildings, and in some cases deliver water or medical supplies to people who cannot be reached any other way.

The Risks That Deserve More Attention

Most coverage of drones focuses on capability. The risks tend to get less space. Here are the ones that matter most.

Where the genuine value is

  • Delivering medical supplies to places ground vehicles cannot reach
  • Reducing agricultural chemical use through precision application
  • Keeping workers out of dangerous inspection environments
  • Faster disaster response when every hour matters
  • Reducing military risk to human combatants

Where the risks are real

  • Autonomous weapons without accountability — When an AI makes a lethal decision, who is responsible? The law has not caught up with the technology, and the gap matters.
  • Surveillance at scale — Cheap drones with AI face recognition can monitor entire neighbourhoods continuously. The infrastructure for mass aerial surveillance is being built faster than the legal limits on using it.
  • Democratised attack capability — The same cheap drone technology available for agriculture and delivery can be modified for attack by anyone with motivation and a modest budget. This is not theoretical — it is happening.
  • Airspace management — As drone density increases in low-altitude airspace shared with helicopters and emergency vehicles, the risk of collision and the complexity of managing that airspace both grow significantly.
  • Job displacement — Delivery workers, agricultural sprayers, and infrastructure inspection workers face genuine pressure from drone automation over the coming decade.

What the Next Decade Looks Like

The honest version of where drones are heading involves neither the utopian vision of drone highways delivering everything everywhere nor the dystopian one of skies permanently darkened by surveillance aircraft. Reality will be messier and more interesting than either.

In the near term, expect drone delivery to expand meaningfully in suburban areas as regulations evolve, agricultural drone adoption to accelerate across farms of all sizes, and military programmes to push further into autonomous operation with gradually weakening human oversight requirements. The anti-drone industry will grow in parallel, because every new capability creates a corresponding need for countermeasures.

In the medium term, the regulatory frameworks that have been the real bottleneck for commercial drone deployment will mature, creating space for much wider-scale operations. The economic case for autonomous delivery of lightweight goods will become strong enough that major logistics companies restructure their last-mile operations around it. And the ethical debates around autonomous weapons will become harder to avoid as the gap between capability and legal frameworks widens.

Further out, the questions that matter most are not technical — the technology will continue to improve regardless. They are about governance: what rules will societies set about how autonomous systems can use lethal force, how aerial surveillance data can be collected and used, and how the economic disruption of automation will be managed. These are fundamentally human questions, not engineering ones, and they are the most important drone-related conversations that are not yet happening at the scale they need to be.

For more on how AI is changing the way we work and live, see our guides on what jobs AI will replace, the future of self-driving trucks, and our beginner's guide to AI.

Frequently Asked Questions

Can I get a drone delivery right now?

Yes — in specific areas. Amazon Prime Air, Wing (Alphabet's drone delivery service), Zipline, and Walmart's DroneUp partnership all operate real commercial delivery services in select US cities and internationally. The service is limited to certain locations and to items light enough to carry — typically under five kilograms. The reason it has not expanded faster is regulatory, not technical.

Do military drones operate without human control?

It depends on the system and context. Current US and NATO policy requires a human to authorise lethal force, even when a drone identifies and tracks a target on its own. But defensive systems that intercept incoming drones already operate fully autonomously because the timescales are too short for human decision-making. Swarm operations raise genuine questions about what meaningful human oversight looks like when action is happening faster than humans can review each decision.

How are drones actually used in farming?

The primary use is crop monitoring — flying over fields with specialised cameras that detect plant stress, disease, and pest damage before it is visible to the naked eye. This gives farmers precise information about where problems are rather than requiring blanket treatment of entire fields. Beyond monitoring, agricultural drones handle precision spraying, soil mapping, and livestock tracking. Treating only the affected part of a field, rather than the whole field, cuts costs and reduces environmental impact significantly.

What is a drone swarm?

A group of drones operating under collective AI coordination — communicating with each other, dividing tasks, and adapting together when conditions change. No single human directs each drone; the swarm behaves more like a colony than a fleet. Militarily, swarms are significant because they can overwhelm defences through numbers and coordinated behaviour. Commercially, swarm logic allows many drones to inspect a large structure or monitor a wide area simultaneously, sharing the work intelligently.

Are drones a privacy concern?

Yes, genuinely. AI drones can be equipped with cameras capable of identifying individuals from altitude and monitoring movements over time. The legal frameworks governing what aerial surveillance is permissible — who can deploy it, what data can be retained, who can access it — are significantly underdeveloped relative to what the technology can now do. This is an area where capability has clearly run ahead of governance.

What jobs are at risk from drone technology?

The most directly at risk are light-parcel last-mile delivery workers in suburban areas, agricultural crop sprayers, and infrastructure inspection workers. The displacement will happen gradually over a decade rather than suddenly, and it will be uneven — drones suit specific high-volume repetitive tasks but face real limitations in complex environments. For a broader look at automation and employment, see our guide on what jobs AI will replace.

Which country is most advanced in drone technology?

For military capability, the United States leads — operating the most advanced surveillance, strike, and autonomous systems. China leads in commercial drone manufacturing, with DJI holding a dominant share of the global consumer and commercial market. Israel is a significant exporter of military drone systems. Ukraine has developed remarkable attack drone capability under battlefield conditions in a short time. For the AI software that makes drones genuinely autonomous, US companies are currently at the frontier.

What is stopping wider drone deployment?

Primarily regulation. For commercial delivery and inspection, the technology is largely ready — the bottleneck is regulatory frameworks for flying unmanned aircraft at scale in shared airspace. For military applications, the constraints are ethical and legal: the frameworks governing autonomous weapons have not kept pace with capability. For agricultural use, the main remaining barriers are cost of entry for smaller farms and the training needed to support operations at scale.


Thursday, May 7, 2026

The Future of AI in Education

The Future of AI in Education: Will It Improve Test Scores, Do We Need Fewer Teachers, and Is It Actually Good for Students?

86% of students now use AI for schoolwork. In one major survey, student AI use jumped from 66% in 2024 to 92% in 2025 — one of the sharpest year-over-year rises on record. The AI in education market hit $7.57 billion in 2025, up 46% from the previous year, and is projected to reach $112 billion by 2034. And yet 85% of teachers say they feel unprepared to manage AI in their classrooms, and 70% worry it is weakening students' critical thinking. The gap between how fast AI is entering education and how ready schools are to handle it is one of the defining challenges of 2026. This guide cuts through the hype to tell you what the research actually shows about AI's impact on learning — test scores, teacher jobs, and the genuine pros and cons that every student, teacher, and parent should understand.

Table of Contents

  1. Where AI in Education Actually Stands in 2026
  2. Does AI Actually Improve Test Scores? What the Research Says
  3. Will We Need Fewer Teachers?
  4. The Real Benefits of AI in Education
  5. The Real Problems with AI in Education
  6. What This Means for Students Right Now
  7. What This Means for Teachers Right Now
  8. What Parents Should Actually Do
  9. Frequently Asked Questions

Where AI in Education Actually Stands in 2026

AI in education is no longer experimental. It is the default reality in most classrooms and homes, whether schools have a policy for it or not.

The 2026 snapshot: 86% of educational organisations have embraced generative AI — the highest adoption rate of any industry. 83% of K–12 teachers use generative AI for lesson planning, feedback, and content. 82% of college students use AI, compared to 58% of high school students. ChatGPT leads with 66% student usage. The AI in education market is growing at 36% annually. And yet only 20% of universities have a formal AI policy, and 60% of educators and students report receiving zero AI training despite rapid adoption.

The three most common student uses are: research assistance (the most common), summarising information (38% of students), and generating study guides (33%). Notably, 63% of students say they use AI for less than half of their academic tasks — suggesting most are still using it as a supplement to their own thinking. For teachers, AI's biggest reported benefits are time savings: 81% say it saves time on administrative work, 80% on lesson preparation, and 79% on grading. The average teacher reclaims nearly six hours per week — time that can be redirected toward students who need the most support.

Does AI Actually Improve Test Scores? What the Research Says

The headline figures are striking, but they need context.

The strong positive evidence

A peer-reviewed randomised controlled trial published in Scientific Reports in June 2025 found that an AI tutor outperformed traditional in-class active learning with an effect size of 0.73–1.3 standard deviations. To put that in perspective, an effect size of 0.4 is considered meaningful in educational research — this is one of the strongest findings for any educational intervention in recent years. Students using an enhanced AI tutor achieved 127% improvement in target outcomes, compared to 48% with a standard AI chatbot. Khan Academy's Khanmigo produced a 1.4 grade-level improvement in pilot districts. Carnegie Learning's MATHia, used by over 1 million students, showed 42% improvement in learning outcomes. ALEKS showed 35% improvement in course completion for at-risk students.
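For readers unfamiliar with effect sizes: an effect size measured in "standard deviations" (Cohen's d) is simply the difference between two group means divided by their pooled standard deviation. A minimal illustration with made-up numbers — these are not data from the study cited above:

```python
# Illustrative only: how Cohen's d is computed from group summaries.
# All numbers here are hypothetical, not from the Scientific Reports study.
mean_ai_group = 78.0     # hypothetical mean exam score, AI-tutored group
mean_control = 70.0      # hypothetical mean exam score, traditional group
sd_pooled = 10.0         # hypothetical pooled standard deviation

d = (mean_ai_group - mean_control) / sd_pooled
print(f"Cohen's d: {d}")  # 0.8 — within the 0.73–1.3 range reported above
```

On this scale, 0.4 is the threshold the article cites as educationally meaningful, which is why an effect of 0.73–1.3 counts as unusually large for an educational intervention.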

Key statistics on AI and test scores: Students in AI-powered learning environments achieve 54% higher test scores than those in traditional settings. In schools using AI-driven maths apps, test scores increased by 19% within three semesters. University students using an AI chatbot scored approximately 10% higher on exams than non-users. Students with learning disabilities using AI speech assistants showed a 29% boost in reading fluency. Low-income students using subsidised AI tutoring apps increased maths scores by 22%. In higher education, AI-enhanced tutoring led to a 25% drop in course failure rates.

The important caveats

The University of Massachusetts Amherst found that structured AI use improved student engagement and confidence but did not raise exam scores in their study. Students with AI access spent less time on homework while maintaining similar grades — suggesting efficiency gains rather than performance improvements. And crucially, students relying heavily on standard AI chatbots performed measurably worse when the AI was removed — suggesting dependency rather than genuine learning in some cases.

The honest summary: AI tutoring tools specifically designed for learning — adaptive, feedback-rich, pedagogically structured — show genuinely strong evidence of improving outcomes. General-purpose AI chatbots used as homework tools show much more mixed results, with some evidence of dependency effects that may harm long-term learning.

The critical thinking finding: Multiple studies now show a negative correlation between AI tool usage and critical thinking scores — particularly for younger students. 70% of teachers worry that AI weakens critical thinking and research skills. This is not theoretical — it is emerging from the data. How AI is used matters enormously: AI as a scaffold for learning produces different outcomes from AI as a replacement for thinking.

Will We Need Fewer Teachers?

The honest answer is almost certainly no — at least not within any meaningful planning horizon. But the nature of teaching is changing, and that matters for anyone entering or already in the profession.

UNESCO and McKinsey both project that teacher demand will keep climbing through 2035, primarily because personalised AI-driven learning actually increases the need for skilled human guidance. In districts using AI-powered learning management systems, staffing levels have remained steady while student-support roles — mentors, interventionists, instructional coaches — have actually expanded. The Learning Policy Institute estimated that one in eight teaching positions in 2025 was either unfilled or filled by teachers not fully certified for their roles. This is a shortage crisis, not a surplus.

The Pew Research finding: 31% of AI experts — people whose work focuses specifically on AI — predicted that AI would lead to fewer teaching jobs over the next 20 years. This is a significant minority view, not a fringe one. But even these experts are largely talking about a 20-year horizon, not an imminent change. For career decisions in 2026, teaching remains one of the most stable, human-centred professions in an increasingly automated economy.

The composition of what teachers do will change significantly even if total numbers remain stable. Tasks AI handles well — content delivery, routine assessment, progress tracking, differentiated worksheet generation, report drafting — will occupy less time. Tasks AI cannot do — building relationships, navigating emotional complexity, managing classroom dynamics, modelling intellectual curiosity — will occupy more. Many experienced teachers say the administrative and content-generation burden is what drives burnout. If AI removes that burden, the job could become both more sustainable and more focused on what drew most people to teaching in the first place.

The Real Benefits of AI in Education

Where AI is genuinely helping

  • Personalised learning at scale — AI adapts content, pacing, and difficulty to each student in real time. A classroom of 30 can receive 30 different learning paths simultaneously.
  • Immediate, specific feedback — AI provides feedback within seconds rather than days. Faster feedback loops consistently improve retention.
  • Accessibility for students with disabilities — 29% reading fluency boost for students with learning disabilities. 71% of inclusive classrooms use AI for customising to individual education plans. One of AI's clearest, least contested benefits.
  • Equity and access — In refugee camps, AI helped 19,000 children gain basic literacy in under six months. Remote schools used AI tablets to raise attendance by 17%. AI provides specialist-quality tutoring to students who could never afford $70–$120/hour private tutors.
  • Teacher time reclaimed — 81% of teachers say AI saves time on admin, averaging six hours per week that can go to students who need the most support.
  • More active learning time — Students using AI tools spend 34% more time in active learning. AI revision tools reduced exam prep time by 22%, allowing better effort distribution.

The Real Problems with AI in Education

Where AI is creating genuine problems

  • Academic integrity crisis — Educators catching AI-related cheating rose from 53% to 61% in one year. 72% of educators fear AI will increase plagiarism. AI detection tools have high false positive rates, meaning honest students are being accused.
  • Critical thinking erosion — Studies show a negative correlation between AI tool usage and critical thinking scores, particularly for younger students who outsource thinking to AI.
  • Dependency effects — Students who relied heavily on AI chatbots performed measurably worse when the AI was removed. This is a learning dependency, not a learning gain.
  • The disconnection problem — Half of students report feeling disconnected from teachers when AI mediates their interactions. The student-teacher relationship is one of the strongest predictors of academic success.
  • Data privacy risks — 71% of educators cite data privacy and algorithmic bias as top concerns. Children's data deserves the highest protection standards — which current frameworks often do not yet provide.
  • The policy vacuum — Only 20% of universities have a formal AI policy. 85% of teachers feel unprepared. The technology has raced far ahead of institutional response.
  • Widening inequality — Access to high-quality AI tools is uneven. Without deliberate policy, AI risks amplifying existing educational inequalities.

| AI Application | Evidence | Key risk | Verdict |
| --- | --- | --- | --- |
| Structured AI tutoring (Khanmigo, MATHia) | Strong — 42–127% learning gains in RCTs | Access equity | ✅ Strong positive evidence |
| General AI chatbots for homework | Mixed — some gains, dependency effects | Critical thinking erosion | ⚠️ Use with caution |
| AI for students with disabilities | Strong — 29% reading fluency gains | Data privacy | ✅ Clear benefit |
| AI for teacher admin and planning | Strong — 6 hrs/week reclaimed | Over-reliance | ✅ Clear benefit |
| AI for essay writing and assessment | Weak — integrity issues, unreliable detection | Academic fraud, false accusations | ❌ Significant problems |
| Adaptive learning platforms | Moderate to strong | Reduced teacher relationship time | ✅ Positive with human oversight |

What This Means for Students Right Now

  1. Use AI to understand, not to produce — Students who use AI to explain concepts, generate practice questions, and get feedback on their thinking benefit most. Those who use it to generate final outputs show dependency effects and perform worse without it.
  2. Know your institution's policy — Only 20% of universities have formal AI policies, but violations are taken seriously. Know the rules before using the tools.
  3. Develop AI literacy as a skill — Understanding how AI works, where it is unreliable, and how to critically evaluate its outputs is becoming as fundamental as information literacy. Students who can use AI effectively and critically will be more employable.
  4. Do not let AI replace the teacher relationship — The student-teacher relationship is one of the strongest predictors of academic success. AI can supplement it but should not substitute for it.

What This Means for Teachers Right Now

  1. Use AI for the tasks that drain you, not the tasks that define you — Administrative work, worksheet generation, progress report drafting, quiz creation — use AI here first and aggressively.
  2. Redesign assessments, do not just police AI use — Catching AI-assisted work is an arms race you cannot win. Design assessments requiring genuine engagement: oral defences, in-class work, process portfolios, novel problem types.
  3. Build your own AI literacy — 85% of teachers feel unprepared. The teachers who develop AI fluency now will be more effective and more professionally resilient.
  4. Focus on what AI cannot do — Relationship, mentorship, specific personal feedback from knowing a student over time, modelling intellectual curiosity — lean into these. They matter most for long-term student outcomes and are what AI cannot replicate.

What Parents Should Actually Do

  1. Ask your child's school what their AI policy is — If they do not have one, raise it as a concern. Schools without AI policies leave students and teachers to navigate it alone.
  2. Have direct conversations about how your child uses AI — Not to police it but to understand it. Is your child using AI to understand difficult concepts, or to complete homework without engaging with it?
  3. Do not assume AI use equals cheating — Using AI as a study tool, getting explanations, checking work — many uses are equivalent to using a calculator or dictionary. Context and intent matter.
  4. Advocate for equity in AI access — The benefits of high-quality AI tutoring are substantial and unequally distributed. Advocate for school-wide access to evidence-based AI learning tools.

For more context on how AI is changing education, careers, and the broader workforce, see our guides on AI in education, top free AI tools in 2026, and what jobs AI will replace.

US Department of Education: Artificial Intelligence and the Future of Teaching

How AI could radically change schools by 2050

Frequently Asked Questions

Does AI actually improve test scores?

The evidence is genuinely strong for specifically designed AI tutoring tools, and more mixed for general chatbots. A peer-reviewed RCT published in Scientific Reports in June 2025 found AI tutoring outperformed traditional learning with effect sizes of 0.73–1.3 standard deviations — significantly above the 0.4 threshold considered meaningful in educational research. Students in AI-powered environments achieve 54% higher test scores on average. However, students relying heavily on general chatbots show dependency effects and perform worse when AI is removed.

Will AI replace teachers?

No — not in any timeframe relevant for current career decisions. UNESCO, McKinsey, and OECD all project rising teacher demand through 2035. Districts using AI have maintained staffing while expanding support roles. One in eight teaching positions is already unfilled — AI is more likely to help address this gap than create a surplus. The 31% of AI experts who predict fewer teaching jobs are largely talking about a 20-year horizon, not an imminent change.

Is using AI for schoolwork cheating?

It depends on how it is used and your institution's policy. Using AI to explain concepts or get feedback on your thinking is generally acceptable and educationally beneficial. Using AI to generate work you submit as your own violates academic integrity at virtually every institution. The honest test: if you are using AI to avoid engaging with the material rather than to deepen your engagement with it, it is probably crossing the line.

Does AI help students with learning disabilities?

Yes — this is one of AI's clearest benefits. Students with learning disabilities using AI speech assistants showed a 29% boost in reading fluency in 2025. 71% of inclusive classrooms use AI for customising to individual education plans. AI provides the kind of differentiated, patient, infinitely repeatable instruction that human teachers cannot sustainably provide at individual scale.

Is AI making students worse at critical thinking?

There is emerging evidence it can — particularly when students use AI to bypass thinking rather than support it. Multiple studies show a negative correlation between AI tool usage and critical thinking scores. 70% of teachers report concern about this. Skills that are not practised do not develop — students who outsource analysis and synthesis to AI may be efficient short-term and academically weaker long-term.

What AI tools are proven to help students learn?

Khan Academy's Khanmigo (1.4 grade-level improvement), Carnegie Learning's MATHia (42% improvement across 1M+ students), and ALEKS (35% improvement for at-risk students) have the strongest evidence. The 2025 RCT in Scientific Reports found enhanced AI tutors with pedagogical design dramatically outperformed both standard chatbots and traditional instruction.

Should schools ban AI or embrace it?

Evidence strongly suggests blanket bans are both ineffective and counterproductive. 86% of students already use AI — prohibition drives use underground and removes the opportunity to teach responsible use. The best outcomes come from clear policies defining acceptable use, assessment redesign requiring genuine engagement, investment in teacher AI literacy, and proactive adoption of evidence-based AI learning tools.

How is AI changing what teachers do?

AI is most significantly changing the administrative and content-generation burden. 81% of teachers say AI saves time on admin, 80% on lesson preparation, and 79% on grading — reclaiming an average of six hours per week. This time can go toward individual student coaching, relationship-building, and intervention for struggling students — the high-value human work that most teachers entered the profession to do.
