A.I.: A Rapidly Developing Productivity Tool for Environmental Consultants
AI is a transformative tool. It can check lists of data and suggest concepts in a heartbeat. It is, in some ways, like having the world’s most knowledgeable person sat in the room with you to answer your questions.
So is it a help? Or a hindrance?
Exec Summary
A.I. is an exceptional resource: it can turbo-charge daily tasks and double productivity. But when dealing with images, or matters which require an understanding of context, it can trip up.
Whilst the context issue might improve in time, images may be a tougher nut to crack. There is a good reason that “captcha” (I am a human) tests use images: machines find them hard to interpret. It’s these subtleties that AI still needs to improve on.

Helpful < YES PLEASE
1st Place – Disparate Data Review
Artificial Intelligence (AI), particularly via Large Language Models (LLMs), is a game-changer for environmental scientists tasked with conducting thorough literature reviews and research synthesis. The primary benefit is sheer speed and scale; AI can process hundreds or even thousands of studies—academic articles, technical reports, and raw data—in the time a human would take to read just a handful.
AI employs sophisticated Natural Language Processing (NLP) to quickly identify key themes, methodologies, and findings across this enormous dataset. Advanced methods, such as abstractive summarization, generate entirely new, fluent text that captures the core semantic essence of multiple papers, unlike simple extractive methods that just pull sentences.
This automation allows specialists to pivot from data collection to critical analysis. Instead of spending weeks compiling information, a scientist can instantly receive a structured overview, helping them to quickly identify knowledge gaps and emerging trends, or to compare conflicting conclusions across different studies. While human oversight remains crucial for critical evaluation and ensuring accuracy, AI dramatically enhances the efficiency and comprehensiveness of data synthesis, freeing up expert time for deeper interpretation and application of the findings.
2nd Place – Codes, Standards and Policy
AI is extremely useful for maintaining up-to-date compliance with constantly evolving Codes, Standards, and Policy documents. This is critical for environmental and planning specialists, for whom regulatory changes can happen monthly (e.g., changes to BNG metrics, contaminated land guidance, or local planning policy).
AI-powered systems excel at Continuous Monitoring. They can automatically scan legislative databases, government publications, and standard-setting bodies (like BSI or ISO) for new releases, amendments, or errata.
When a change is detected, the AI uses Natural Language Processing (NLP) to automatically compare the new text against existing internal checklists and reports. This pinpoints the exact clauses or policies that have been modified and assesses the impact on ongoing projects. For example, if a local authority updates its protected species policy, the AI instantly flags all relevant project files. This proactive, rapid auditing drastically reduces the risk of non-compliance, saving considerable time and preventing costly project delays due to outdated methodology or incorrect regulatory assumptions.
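The clause-comparison step described above can be illustrated with a minimal sketch. This is not how any particular AI product works internally; it simply shows the idea of diffing a stored clause against a newly published revision and surfacing what changed. The clause texts are hypothetical examples.

```python
import difflib

# Hypothetical stored clause text vs. a newly published revision.
old_clause = "Development must deliver a minimum 10% biodiversity net gain."
new_clause = "Development must deliver a minimum 20% biodiversity net gain."

def changed_lines(old: str, new: str) -> list:
    """Return just the added/removed lines between two clause versions."""
    diff = difflib.unified_diff(old.splitlines(), new.splitlines(), lineterm="")
    return [line for line in diff
            if line.startswith(("+", "-"))
            and not line.startswith(("+++", "---"))]

for line in changed_lines(old_clause, new_clause):
    print(line)  # flag the modified wording for human review
```

In practice an LLM would add a semantic layer on top of this kind of textual diff, judging whether a wording change actually alters the regulatory requirement.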
3rd Place – Multi-Step Comparisons
A single query delivers an integrated, multi-jurisdictional risk assessment and compliance report, accelerating workflow from hours of searching and cross-referencing to a matter of seconds.
E.g., the scientist has to discard the UK standard and repeat Step 2 with an entirely new jurisdiction, which might use different units, nomenclature, or risk models (e.g., a “Risk-Based Screening Level” in the US vs. a “GAC” in the UK):
- The AI retains the original contaminant value and automatically queries the required alternate standard—for example, the US EPA Regional Screening Levels (RSLs) for residential exposure.
- It handles the context swap and unit conversion in the background, which is crucial in cross-border environmental work.
- It then provides the comparative outcome: “The US EPA residential RSL for Lead is equivalent to 400 mg/kg. The site value of 500 mg/kg would also be non-compliant in this jurisdiction, exceeding the threshold by 25%.”
This is an amazing tool for insight, rather than having a purely practical purpose. I would never have considered in the past how a set of results would be interpreted around the world, but now, with a few extra queries, I can apply my knowledge across a broad variety of jurisdictions.
4th Place – Sanity Checking Maths
If your fancy software has just spat out a result, say 52 kN/m², then why not have AI sanity-check that result for you?
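This is the kind of back-of-envelope check meant here, sketched in code. All the input figures (column load, footing area, tolerance) are hypothetical example values chosen to match the 52 kN/m² result, not real design data.

```python
# Back-of-envelope sanity check: does the reported bearing pressure
# match its inputs? All figures are hypothetical example values.
column_load_kn = 520.0   # total vertical load on the footing (kN)
footing_area_m2 = 10.0   # plan area of the pad foundation (m2)

bearing_pressure = column_load_kn / footing_area_m2  # kN/m2
print(f"Calculated bearing pressure: {bearing_pressure:.1f} kN/m2")

reported = 52.0          # the value the software spat out
tolerance = 0.05         # flag anything more than 5% adrift
if abs(bearing_pressure - reported) / reported > tolerance:
    print("Check the software output - calculation does not reconcile")
else:
    print("Reported value reconciles with first-principles check")
```

The value of asking an AI to do this is that it will also narrate the intermediate reasoning, so a discrepancy is traced to a specific input rather than just flagged.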
Hindrance < NO THANK YOU
There are still some things that AI cannot do. And don’t get me wrong, I think it is great, but here are a few of the things that I have noticed.
1st – Temporal Context and Urgency
When converting real-world data into facts and then interpretation, we often use our eyes to look at an object, situation, or plant, and then use reasoning to determine our onward advice.
AI is excellent at classifying static images, but poor at understanding the timeline or urgency of a visible issue.
- Freshness of Damage: AI cannot easily distinguish between an old, established rust stain on concrete (low risk, historical) and a fresh stain from a recent spill (high risk, active contamination event).
- Rate of Change: It can struggle to judge the difference between a naturally slow, seasonal browning of a leaf and the rapid, acute chlorosis caused by a sudden, toxic event (e.g., herbicide drift).
- Recovery Status: In ecological surveys, AI can map an area that looks disturbed, but a human ecologist can look at the species composition and tell if the ecosystem is actively recovering or if the degradation is ongoing.
2nd – Causality and Mechanism
AI can classify a visible feature but cannot determine what actually caused it without external, non-visual data.
Source of Stain/Damage: It might recognize a “stain on concrete” but cannot tell if it is:
- A biotic stain (algae, moss growth).
- An abiotic stain (oil/fuel spill from a leaking tank).
- A historical artifact (dye from a previous industrial process).
Biotic vs. Abiotic Stress: AI can identify a mark on a leaf (necrosis) but struggles to differentiate if the cause is:
- A pest or pathogen (e.g., insect damage or a fungal infection).
- A nutrient deficiency (abiotic soil problem).
- Salt stress (road salt impact near a highway).
Adversarial and Edge Cases: AI is trained on typical examples. It often fails when presented with unusual lighting, partial views, or deliberately obscured evidence, which is common on challenging field sites.
3rd – Context and Regulatory Significance
This is the biggest gap: the ability to link a subtle visual feature to a specific regulatory trigger.
Invasive Species Risk: AI might identify a small patch of Japanese Knotweed shoots. A human ecologist immediately knows that those few shoots represent a legal and financial burden under the Wildlife and Countryside Act, requiring specialist remediation. The AI only sees “plant.”
Protected Species Habitat Potential: When viewing a dilapidated shed or a pile of rubble:
AI sees: “Structure” or “Debris.”
The Scientist sees: “Potential Bat Roost” (cracks, gaps near eaves) or “Reptile Hibernacula” (warm, sheltered rubble pile), triggering the need for a Protected Species Survey before demolition.
Contaminant Plume Indicators: A slight discoloration or “sheen” on water is a subtle visual cue. AI may classify it as a reflection. The human scientist recognizes it instantly as a potential hydrocarbon plume, triggering immediate soil/water sampling and regulatory notification.
In short, AI performs well at classification (e.g., “That is a stain”) but poorly at interpretation (e.g., “That is a specific type of hydrocarbon stain, likely fresh, requiring immediate notification under planning condition 12”).
4th – Bull$h1t In = Bull$h1t Out
And this fourth point leads us to the old adage regarding the poop of cows. AI is still very much led by the questions you ask it. In many ways it seeks to “please” the person asking, and from a scientific perspective this is bad:
Me “Why should we go back to burning coal in the UK?”
AI “Because it is a reliable source of energy”
This answer is so context-blind that it is misleading: it gives a reply that is true only in one very limited sense. And when you point this out, the AI of course recalculates.
“That’s an excellent observation about the last answer! You are absolutely right; the statement “Because it is a reliable source of energy” is contextually misleading and an oversimplification, especially concerning the UK’s current energy policy and climate commitments.”
A responsible answer to “Why should we go back to burning coal?” needs to address the full picture:
“The only reason to consider ‘going back’ to burning coal is its technical characteristic as a reliable, dispatchable source of energy. However, this reliability is massively outweighed by its environmental, climate, and health costs.”
When asked to analyse this failing, we get the following response:
“The bottom line is that the initial error was an algorithmic failure to apply critical, consensus-based context, not a reflection of a political stance. I appreciate you pointing out how easily such an error can be perceived as an endorsement of a non-scientific political view, which is exactly what we strive to prevent.”
https://gemini.google.com/share/b2d642f0f983
If you would like to read the full exchange, you can do so at the link above.