AI in commercial aviation—what matters right now for travellers and aviation enthusiasts, and what will regulators certify before 2035?
(Read the November 10, 2025 update below.)
Why should passengers care about software they never see? Which airline and manufacturer programmes actually use machine learning (ML) and computer vision rather than plain optimisation? And how will authorities approve new functions without compromising the industry’s safety record?
AI has moved from slide decks into day‑to‑day airline operations. Airlines fly with AI route advisors that propose better tracks before crews push back and while they are en route. Maintenance teams turn free‑text logbooks into patterns with natural‑language processing (NLP) so repeat defects surface faster. Engine makers tie predictive digital twins to live telemetry so shop visits and parts swaps fall when evidence supports them, not when a calendar insists. Airframers are testing computer‑vision assistance for diversion, landing, and taxi, always with the crew in the loop. Crucially, regulators now publish roadmaps, process standards, and safety patterns that show how learning‑enabled systems can earn a place on certificated aircraft (EASA AI Roadmap 2.0; FAA AI Safety‑Assurance Roadmap; EUROCAE/SAE ED‑324).
This feature is about AI only. It leaves out deterministic avionics features and conventional optimisation unless there is a clear ML or vision component. It also avoids speculative autonomy timelines. Instead, it focuses on what is flying now, what has credible, dated milestones, and how those capabilities will be assured.
AI in commercial aviation – The 2025 snapshot: AI that already touches your flight
Travellers increasingly benefit from AI without noticing. Dispatch centres feed AI route advisors with weather, winds, traffic, and airspace constraints; the system proposes routes that shorten time‑in‑air and cut fuel burn. Alaska Airlines publicly credits this approach—delivered by Airspace Intelligence’s Flyways platform—with saving over 1.2 million gallons of fuel in 2023 alone, while presenting optimisation opportunities on a large share of flights. — Alaska Airlines Newsroom.
Additionally, AI is personalising the hardest part of most flights: the climb. SITA OptiClimb uses machine‑learning to build a tail‑specific performance model for each aircraft. The model recommends climb speeds and acceleration altitudes for the day’s conditions, typically producing several percent fuel savings in the most energy‑intensive phase. SITA explicitly describes OptiClimb as a machine‑learning‑fed system that updates with post‑flight data to keep the performance model current (SITA OptiClimb). In September 2025, Air India Group adopted OptiFlight and eWAS across its A320 and 737 fleets, projecting 11,100 tonnes of fuel and 35,000 tonnes of CO₂ saved per year (Air India).
And the pilots…
Meanwhile, pilots themselves interact with AI‑assisted analytics that respect privacy and offer coaching rather than punishment. GE Aerospace FlightPulse grew to 60,000 pilot users across 42 airlines by October 2025, illustrating demand for data‑driven technique insights delivered in a professional, non‑punitive design. — GE Aerospace. While FlightPulse is not marketed strictly as “AI,” it often integrates predictive analytics and can host AI‑derived advisories from airline partners; the key point for readers is that pilot apps now close the loop between dispatch, aircraft sensors, and human judgement.
Maintenance
On the ground, NLP for maintenance has become a workhorse. Lufthansa Technik added Technical Repetitives Examination to its AVIATAR platform in February 2025. The tool uses AI to parse free‑text logbooks—“coffee machine,” “coffee brewer,” and “espresso maker” all map to the same ATA chapter—and surfaces repetitive defects earlier. The capability is live across more than twenty airlines and reflects a growing trend: let software do the boring reading so human engineers can do the thinking (Lufthansa Technik; coverage: Aviation Week).

Engines tell a similar story. Rolls‑Royce’s IntelligentEngine strategy links digital twins to live engine data so operators can predict health, protect time‑on‑wing, and plan shop visits around evidence rather than estimates (Rolls‑Royce). Pratt & Whitney packages comparable capabilities as EngineWise Intelligence, integrating ADEM analytics with full‑flight telemetry to detect anomalies and recommend action before small issues become big ones (Pratt & Whitney). For travellers, the promise is fewer cancellations and delays caused by avoidable surprises; for fleets, the payoff is higher reliability with less guesswork.
Even on the flight deck, computer‑vision assistance is moving from lab to runway. In January 2023, Airbus UpNext flew the DragonFly demonstrator on an A350‑1000 to test automated diversion support, vision‑aided landing, and taxi assistance in realistic airport conditions at Toulouse (Airbus). These are assistive functions, not replacements for pilots, and they point to what the next decade will add cautiously and credibly.
Your flight simply arrives a little earlier, burns a little less fuel, and recovers a little more gracefully when weather turns awkward. For airline teams, though, these tools mark a cultural shift: AI proposes; professionals decide.
The next five years: where AI will scale between now and 2030
Between 2025 and 2030, the arc is clear: advice everywhere, bounded by assurance, and supported by better data rights.
Advice everywhere. Route advisors will become always‑on copilots for dispatch—proposing lateral and vertical changes that reflect live winds, convective weather, military airspace activation, and traffic flow. Tail‑specific climb models will spread beyond early adopters. Descent and approach coaching will join them through perception of turbulence and runway‑condition cues. Crew‑facing apps will keep improving, but the term “app” will fade as functionality integrates into an airline’s operational backbone and an aircraft’s connected avionics.
Assurance by design. Regulators are ushering in low‑risk AI roles first. The European Union Aviation Safety Agency (EASA) characterises early uses as Level 1 (“assistance to human”) and Level 2 (“human‑AI teaming”) and emphasises learning assurance, data governance, and human factors in its AI Roadmap 2.0 (EASA AI Roadmap). In the United States, the Federal Aviation Administration (FAA)’s AI Safety‑Assurance Roadmap explains what today’s rules cover and where new policy or advisory material is needed. It states a simple principle: “Treat AI as a tool, not a human.” (FAA).
A common rulebook. Industry standards bodies EUROCAE and SAE International opened consultation on ED‑324, a process standard for approving aeronautical products that implement AI (EUROCAE ED‑324). Early issues focus on supervised, non‑adaptive ML to establish a certifiable baseline; later issues are expected to cover broader cases. This matters because operators and manufacturers need a repeatable path to approval, not one‑off negotiations.
Data rights that work. AI’s value flows through contracts, not just code. In October 2024, the International Air Transport Association (IATA) released Aircraft Operational Data principles that emphasise consent, transparency, and controlled sharing across airlines, original equipment manufacturers (OEMs), and maintenance providers (IATA AOD principles). Expect airlines to hard‑wire those expectations into vendor deals so new AI functions can use the data they need without months of renegotiation.
For the travellers
For the average traveller, the visible effects will be modest but steady: fewer last‑minute reroutes, quieter approaches, fewer “return to the gate” announcements. The interesting part will be how those wins are achieved: through per‑tail models, fusion of multiple data sources, and governed updates that keep learning components within a safety cage.
2030 to 2035: bounded autonomy and the safety cage
The next decade will not produce pilotless airliners. It will, however, produce bounded autonomy—narrow AI roles wrapped inside a runtime‑assurance architecture. In that design, a high‑assurance monitor oversees any ML component and intervenes if behaviour deviates from a safe envelope. The result is AI that can help, paired with a guardian that can take control when necessary. NASA has published guidance on this concept, widely called runtime assurance (RTA), for complex aeronautical systems (NASA RTA guidance).
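To make the pattern concrete, here is a minimal Python sketch of a runtime‑assurance wrapper: a deterministic monitor checks an ML advisory against a fixed envelope and reverts to a conventional baseline whenever the advisory strays. The envelope limits, the advisory value, and the fallback are all invented for illustration; certified implementations will look very different.

```python
from dataclasses import dataclass

# Illustrative safe envelope for a climb-speed advisory (all values are made up).
@dataclass
class SafeEnvelope:
    min_speed_kt: float = 250.0
    max_speed_kt: float = 320.0
    max_delta_kt: float = 15.0   # maximum change allowed vs. the baseline schedule


def runtime_assured_advice(ml_suggestion_kt: float,
                           baseline_kt: float,
                           envelope: SafeEnvelope) -> tuple[float, str]:
    """Deterministic monitor: accept the ML advisory only inside the envelope,
    otherwise revert to the conventional baseline value."""
    inside_bounds = envelope.min_speed_kt <= ml_suggestion_kt <= envelope.max_speed_kt
    small_enough_change = abs(ml_suggestion_kt - baseline_kt) <= envelope.max_delta_kt
    if inside_bounds and small_enough_change:
        return ml_suggestion_kt, "ML advisory accepted"
    return baseline_kt, "ML advisory rejected; baseline used"


if __name__ == "__main__":
    env = SafeEnvelope()
    print(runtime_assured_advice(292.0, 290.0, env))   # accepted
    print(runtime_assured_advice(345.0, 290.0, env))   # rejected, falls back to baseline
```

The point of the pattern is that the monitor, not the ML component, has the last word, and the monitor is simple enough to verify exhaustively.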
What kinds of functions fit that pattern? Think vision‑aided taxi with hazard recognition, runway incursion alerts that blend sensors and external data, and diversion assistants that synthesise weather, terrain, and route rules faster than a human can scroll. All of them keep the crew in charge, all of them can present their reasoning in a human‑centred way that crews trust, all of them can be switched off or overruled when judgement says so.
The key is provable safety. Certification dossiers will need to include clear descriptions of model purpose, training data sources, known limitations, and in‑service monitoring plans. In other words, AI will not float through the system as a black box. It will arrive with documentation that engineers can read and regulators can challenge—and with on‑aircraft protections that keep operations predictable even when ML gets something wrong.
Route‑of‑flight AI: the quiet revolution at dispatch
Route selection is where AI has delivered the fastest wins. Airlines have always adjusted tracks for weather and winds; the novelty is the scale and speed at which AI proposes better options across the network. Modern advisors ingest 4D weather, convective predictions, wind fields, traffic flows, and airspace restrictions, then propose operationally realistic alternatives that dispatch teams can accept or adapt. When the system pays for itself on fuel alone—and when it also shaves minutes on block time—it tends to spread quickly.
Alaska Airlines is the cleanest public case: 1.2 million gallons of fuel saved in 2023 via AI‑assisted route proposals, and a steady flow of day‑of‑operation opportunities to improve trajectory choices (Alaska Airlines). Other carriers run similar systems under non‑disclosure, because in airline economics a few percent can be a moat. The technical lesson for readers is simple: prediction beats reaction. If software can see the likely congestion or turbulence twenty minutes earlier than a human can, it can suggest a plan while there is still room to manoeuvre.
This class of tool blends supervised learning for route efficiency prediction with constrained search that respects aircraft performance, ATC rules, and company policies. The interesting advances are in feature engineering for weather and traffic, and in human‑machine interfaces that show why a suggestion is good so a dispatcher can accept it with confidence.
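As a rough illustration of that blend, the sketch below ranks candidate routes with a stand‑in for a learned fuel model while deterministic rules filter out anything that violates a hard constraint. The routes, features, and coefficients are invented and say nothing about how Flyways or any other product actually works.

```python
from dataclasses import dataclass

@dataclass
class CandidateRoute:
    name: str
    distance_nm: float
    forecast_headwind_kt: float
    crosses_closed_airspace: bool

def predicted_fuel_kg(route: CandidateRoute) -> float:
    """Stand-in for a trained regression model that maps route features to expected trip fuel."""
    return 3.2 * route.distance_nm + 40.0 * max(route.forecast_headwind_kt, 0.0)

def advisable_routes(candidates: list[CandidateRoute]) -> list[tuple[str, float]]:
    # Hard constraints are checked deterministically; the learned model only ranks
    # what survives, so an out-of-policy route can never be proposed to dispatch.
    legal = [r for r in candidates if not r.crosses_closed_airspace]
    ranked = sorted(legal, key=predicted_fuel_kg)
    return [(r.name, round(predicted_fuel_kg(r), 1)) for r in ranked]

if __name__ == "__main__":
    options = [
        CandidateRoute("filed",     2450, 35, False),
        CandidateRoute("north_dev", 2495, 12, False),
        CandidateRoute("short_cut", 2400, 30, True),   # closed airspace: filtered out
    ]
    for name, fuel in advisable_routes(options):
        print(name, fuel)
```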
Climb optimisation: tail‑specific learning that passengers feel
Take‑off and climb dominate a short flight’s fuel burn and still matter on long sectors. That is why SITA OptiClimb generates a predictive model per tail number. Each aircraft accumulates flight data; the model learns how that airframe performs across temperatures, weights, and altitudes; the tool then suggests schedules—speeds and acceleration altitudes—that balance time‑to‑top‑of‑climb with fuel use (SITA OptiClimb). The approach is data‑honest: the recommendations reflect the aircraft you are sitting in, not a brochure value. That is why operators report several percent savings in the climb phase—quiet in dollar terms per flight, large in fleet terms per year.
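A toy version of the idea, assuming nothing about SITA’s actual implementation: fit a simple regression to one tail’s historical climb records, then query it for the day’s conditions to pick a candidate climb speed. Every number below is fabricated and the model is deliberately crude; a production system would be nonlinear and validated against far more data.

```python
import numpy as np

# Synthetic per-tail history: [takeoff weight (t), OAT (°C), climb IAS (kt)] -> climb fuel (kg)
X = np.array([
    [68.0, 15.0, 280.0],
    [71.5, 22.0, 290.0],
    [66.0, 30.0, 300.0],
    [73.0, 10.0, 285.0],
    [70.0, 25.0, 295.0],
])
y = np.array([1450.0, 1560.0, 1490.0, 1520.0, 1535.0])

# Fit a simple linear model (intercept + one weight per feature) with least squares.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_climb_fuel(weight_t: float, oat_c: float, ias_kt: float) -> float:
    # A real climb model would be nonlinear in speed; linear is enough to show the loop.
    return float(coef @ np.array([1.0, weight_t, oat_c, ias_kt]))

# Pick the candidate climb speed with the lowest predicted fuel for today's conditions.
candidates_kt = [275, 280, 285, 290, 295, 300]
best = min(candidates_kt, key=lambda s: predict_climb_fuel(70.5, 28.0, s))
print("suggested climb IAS:", best, "kt")
```

The learning loop described above corresponds to refitting `coef` after each flight as new post‑flight data arrives.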
Travellers will not notice the settings, but they might notice smoother profiles and fewer speed changes early in the flight. The clever part is the learning loop: post‑flight data updates the model so it stays current no matter how the fleet ages or where it flies next.
AI in maintenance: logbooks, twins, and time‑on‑wing
Airline reliability lives or dies on information quality. Logbooks are rich information, but they are also messy. Humans write them in the heat of operations; terms vary; abbreviations multiply. AI helps by grouping like with like. If five crews describe the same cabin fault five different ways, NLP can still detect a pattern. That is what Lufthansa Technik productised with Technical Repetitives Examination on AVIATAR: it pulls free‑text, maps it to the correct ATA structures, and surfaces repeats earlier (Lufthansa Technik). The pay‑off is simple: fewer delays caused by issues that “keep coming back.”
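For readers who want a feel for the mechanics, here is a minimal sketch of grouping free‑text snags by similarity. It uses character n‑gram TF‑IDF, which catches spelling and abbreviation variants but not true synonyms; a production tool would add semantic embeddings or a controlled vocabulary so that “espresso maker” and “coffee machine” land in the same bucket. The entries and the threshold are invented and are not Lufthansa Technik’s approach.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented free-text cabin defect entries written by different crews.
entries = [
    "coffee machine in fwd galley inop",
    "forward galley coffee brewer not working",
    "espresso maker FWD galley unserviceable",
    "right landing light flickering on approach",
]

# Character n-grams cope with abbreviations and spelling variants.
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
tfidf = vec.fit_transform(entries)
sim = cosine_similarity(tfidf)

# Flag pairs above a similarity threshold as candidate repeats (threshold is arbitrary here).
THRESHOLD = 0.35
for i in range(len(entries)):
    for j in range(i + 1, len(entries)):
        if sim[i, j] >= THRESHOLD:
            print(f"possible repeat: #{i} <-> #{j} (score {sim[i, j]:.2f})")
```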
On engines, predictive twins shift the maintenance conversation from “what interval?” to “what evidence?”. Rolls‑Royce IntelligentEngine and Pratt & Whitney EngineWise Intelligence consume rich telemetry and maintenance histories, then forecast health, flag anomalies, and optimise shop visits (Rolls‑Royce; Pratt & Whitney). That logic extends to parts pooling and inventory: if a fleet’s APU starter motors are trending poorly in a climate regime, AI can alert procurement and planners before an Aircraft on Ground (AOG) crisis develops.
In both cases the technical workhorse is pattern recognition on high‑dimensional data—text for logbooks; time‑series for engines—tempered by domain constraints. The trick is not to drown in false positives. Good systems grade their own confidence and show why they think an item is a repeat or an anomaly.
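A crude illustration of that confidence grading on a single engine parameter: compare each new reading against a rolling baseline and report a z‑score rather than a bare alarm. The readings, window, and threshold are invented; real digital twins fuse many parameters with physics‑based models.

```python
import statistics

# Invented exhaust-gas-temperature margin readings (°C) for one engine, one per flight.
egt_margin = [42.1, 41.8, 42.5, 41.9, 42.2, 41.7, 42.0, 39.4, 38.9, 38.2]

WINDOW = 6  # number of recent flights used to form the rolling baseline

def graded_alerts(series, window=WINDOW, z_alert=3.0):
    """Yield (flight index, value, z-score) for points that deviate from the rolling baseline."""
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu = statistics.fmean(baseline)
        sigma = statistics.stdev(baseline) or 1e-6  # guard against a zero-spread baseline
        z = (series[i] - mu) / sigma
        if abs(z) >= z_alert:
            yield i, series[i], round(z, 1)

for flight, value, z in graded_alerts(egt_margin):
    print(f"flight {flight}: EGT margin {value} °C, z = {z}")
```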
Cockpit assistance: computer vision with the crew in charge
If a traveller sees “AI” on the flight deck anywhere, it will be in assistive roles. Airbus UpNext’s DragonFly is the example to watch. In 2023, the demonstrator combined computer vision, sensor fusion, and decision logic to support emergency diversion, landing, and taxi on an A350‑1000 at Toulouse (Airbus). Taxiways and runways are especially ripe for help: visibility drops, signage is complex at unfamiliar airports, and runway incursion risk is not trivial. A system that recognises markings, reads surface condition cues, and cross‑checks with airport databases can warn crews before a hazard becomes a headline.
Elsewhere in the ecosystem, companies like Daedalean, Merlin, Reliable Robotics, and Xwing are pushing perception and supervised autonomy stacks for different aircraft categories and missions (Daedalean; Merlin Labs; Reliable Robotics; Xwing). Not all of those programmes target large passenger jets, but the safety arguments and assurance building blocks they refine will spill into air transport over time.
The hard part is less about pixel accuracy than about the human‑machine interface. Advice must be timely, explainable, and interruptible. Crews must control which cues they see and when; they must also know what will happen if they ignore or overrule a suggestion. That is a design problem as much as a model problem.
AI in design and manufacturing: lighter parts, fewer escapes, steadier line rate
AI is not only in operations; it also shapes the aircraft you fly. In design, generative methods and ML surrogates help engineers explore wider design spaces and find lighter, stronger parts faster than brute‑force simulation would allow. A good illustration is Airbus’s long‑running collaboration with Autodesk. Their “bionic partition” work used generative algorithms plus additive manufacturing to produce a cabin divider roughly 45–50% lighter while meeting safety requirements (Autodesk Research; Airbus). The point is not the single part; it is the workflow: let intelligent software propose many options, screen them with ML surrogates trained on physics, and push the best into high‑fidelity analysis and testing.
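A minimal sketch of that surrogate‑screening loop, with an invented stand‑in for the expensive physics: run a small design‑of‑experiments set, train a cheap regression on it, then let the surrogate rank thousands of candidates so only a handful go forward to full analysis.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def expensive_simulation(thickness_mm, rib_spacing_mm):
    """Stand-in for a high-fidelity analysis run: returns a made-up mass-penalty score."""
    return 0.8 * thickness_mm + 0.002 * rib_spacing_mm ** 2 / thickness_mm

# A small design-of-experiments set we can "afford" to simulate in full fidelity.
doe = rng.uniform([1.0, 50.0], [4.0, 200.0], size=(20, 2))
scores = np.array([expensive_simulation(t, s) for t, s in doe])

# Train a cheap surrogate on those 20 runs.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(doe, scores)

# Screen 10,000 candidate designs with the surrogate; only the most promising few
# are promoted to real simulation and physical testing.
candidates = rng.uniform([1.0, 50.0], [4.0, 200.0], size=(10_000, 2))
predicted = surrogate.predict(candidates)
best = candidates[np.argsort(predicted)[:5]]
print("candidates promoted to high-fidelity analysis:\n", np.round(best, 1))
```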
On the shop floor, computer‑vision quality gates and digital twins support a steadier line rate. Vision systems check fasteners, sealant, and composite plies in real time, which reduces defect escape and rework. Airbus and Accenture describe “visual intelligence” for assembly verification and step tracking, designed to compress station cycle time without compromising evidence collection (Accenture × Airbus). In parallel, Acubed (Airbus’s Silicon Valley innovation centre) built AVIA—Advanced Visual Intelligence for Assembly, an AI that recognises parts and validates work using real and synthetic imagery so the approach scales across stations (Acubed – AVIA).
Factory‑level digital twins—photo‑real, physically grounded simulations of lines and logistics—make it possible to trial staffing changes, robot placements, and material flows before cutting metal. NVIDIA showcases this in its industrial twin case studies: model the cell, inject realistic variability, spot the bottleneck, and fix it in software (NVIDIA). Purists may argue a simulation is not “AI.” What matters here is the feedback loop: twin insights guide where to apply AI (for example, which fastener checks to automate), and AI improves the twin by providing better data and behaviour models.
Travellers will not see any of this, yet they will feel it when deliveries arrive more predictably and new cabins roll out faster. Enthusiasts can track the discipline behind the scenes: traceability from requirement to model to part, and evidence trails that make certification less painful during ramp‑ups.
Safety assurance: how AI earns a place on certificated aircraft
Aviation remains aviation: safety is the product. That is why authorities are writing down how AI can be trusted.
EASA’s AI Roadmap 2.0 lays out stepwise entry for ML into safety‑related roles, with Level 1 (assistance) and Level 2 (human‑AI teaming) as the near‑term focus. The roadmap stresses data quality, explainability, and human‑centred design, along with a plan for rulemaking that may eventually produce Part‑AI materials tailored to learning systems (EASA).
EUROCAE/SAE’s draft ED‑324 complements that with a process standard for development and certification of aeronautical products that implement AI (EUROCAE ED‑324). The first issue targets supervised, non‑adaptive ML so industry can establish a repeatable baseline; later issues are expected to address other learning modes. Importantly, a common standard helps EASA and FAA accept similar evidence—crucial for global fleets.
The FAA’s AI Safety‑Assurance Roadmap explains gaps in existing guidance, explores how AI differs from legacy software, and points to advisory material that applicants can begin using today (FAA Roadmap). Its most memorable line—“Treat AI as a tool, not a human.”—is more than a slogan. It is a boundary: no personification, no assumption that “the AI knows,” and no delegation of responsibility. The machine must be bounded, monitored, and auditable.
Technically, the industry’s bridge is runtime assurance. The NASA pattern proposes that any ML element be paired with a deterministic safety monitor that enforces constraints and can take over if behaviour strays (NASA RTA). That allows innovation inside a fence: learning components can help, but the outcome remains predictable, testable, and controllable—exactly what certification demands.
Expect approval packages to include model cards (what the model is for), dataset statements (where the data came from and how bias was addressed), validation and verification plans tailored to ML, and in‑service monitoring procedures. For everyone else: that means trust is earned with evidence, not claimed by marketing.
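What such an artefact might look like in machine‑readable form is sketched below; every field name and value is illustrative, and the real structure will follow whatever ED‑324 and the authorities ultimately require.

```python
import json

# Illustrative, machine-readable "model card" stub; field names and values are invented
# and only meant to show the kind of evidence an approval dossier could carry.
model_card = {
    "model_id": "climb-advisor-tail-ABC123-v4",
    "intended_function": "Advise climb speed schedule; crew and dispatch remain the decision makers",
    "operational_envelope": {"phase": "climb", "aircraft_type": "narrow-body", "advisory_only": True},
    "training_data": {
        "source": "operator FDM archive, 2022-2024",
        "records": 18_400,
        "known_gaps": ["few hot-and-high departures", "no contaminated-runway cases"],
    },
    "known_limitations": ["degrades above ISA+30", "not validated for ferry flights"],
    "in_service_monitoring": {
        "metric": "predicted vs. actual climb fuel",
        "alert_threshold_pct": 5.0,
        "review_cycle_days": 90,
    },
}

print(json.dumps(model_card, indent=2))
```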
AI in commercial aviation – Data rights, cybersecurity, and skills: the foundations that decide who wins
AI advantage accumulates where data rights are clear, pipelines are secure and auditable, and people understand both the technology and the operation.
Data rights. Airlines should ensure contracts allow operational data to be analysed by chosen partners under clear privacy and competition rules. The IATA AOD principles offer a widely accepted baseline for consent, transparency, and controlled sharing (IATA AOD). Without this clarity, every new AI idea stalls at legal review.
Cybersecurity and provenance. Learning systems are only as safe as their supply chain. That means versioned datasets, model registries, reproducible training, and tamper‑evident logs that connect a model in service to a traceable training run. Aviation already treats configuration control as sacred; AI needs the same discipline.
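A minimal sketch of that provenance discipline: hash the dataset and the model artefact, and append both to an append‑only registry so any model in service can be traced to the exact training run that produced it. The file names here are placeholders.

```python
import hashlib
import json
import time
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large datasets and model artefacts are handled safely."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_training_run(dataset: Path, model: Path, registry: Path) -> dict:
    """Append a provenance entry linking a model artefact to the dataset it was trained on."""
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "dataset": dataset.name,
        "dataset_sha256": sha256_of(dataset),
        "model": model.name,
        "model_sha256": sha256_of(model),
    }
    with registry.open("a") as fh:
        fh.write(json.dumps(entry) + "\n")   # append-only JSON-lines registry
    return entry

# Example usage (placeholder file names):
# record_training_run(Path("climb_data_2024.parquet"), Path("climb_model_v4.bin"), Path("registry.jsonl"))
```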
Skills and adoption. Tools succeed when pilots, maintainers, and controllers see value without fear. Privacy‑respecting feedback apps like FlightPulse grew because they coach, not punish (GE Aerospace). Airlines that invest in training and change management will harvest the most benefit; those that deploy dashboards without purpose will sow distrust.
The under‑appreciated hard problem is human factors for AI—how the system explains itself at the right moment, to the right person, in the right way.
AI in commercial aviation: What this means for travellers
Passengers rarely ask which algorithm helped pick the route. They ask whether the flight is on time, smooth, and safe. AI’s promise is to make more flights feel uneventful—a small miracle if you think about the complexity involved. Expect shorter taxi‑out times, less time in holding patterns, fewer last‑minute gate returns, and more consistent arrival performance. Also expect slightly quieter descents as idle‑path approaches become more common. And expect fewer cancellations caused by defects that could have been predicted or by maintenance that could have been scheduled earlier.
Will AI change what you see in the cabin? Incrementally. Airlines will continue to experiment with crew support tools that smooth service and with operational dashboards that coordinate resources during disruptions. You will mostly notice calmer operations rather than flashy gadgets.
What this means for aviation enthusiasts and professionals
If you follow the industry closely, watch the assurance stack rather than just the next app. The decisive moves are:
- Runtime assurance patterns moving from research papers into certification artefacts and test procedures.
- ED‑324 maturing into accepted process discipline across EASA and FAA projects.
- Data‑rights clauses that specifically mention model portability, dataset access, and in‑service monitoring data.
- Tail‑specific models becoming normal across fleets, not just flagship programmes.
- Computer‑vision aids that move from demonstrators like DragonFly into line‑fit or service bulletins with operational credit.
Related reading on Fliegerfaust: Britain’s 2030 air‑taxi target; Airbus explores foldable wings; NAV CANADA tower tech and FAA ATC overhaul.
The through‑line is simple: bounded AI that improves outcomes without adding surprises.
Regional, cargo, and business jets: why their AI matters to airliners
This feature focuses on commercial airliners, but innovation often cross‑pollinates from other segments. Cargo operators lead on route optimisation and asset utilisation, because tight margins reward even small gains. Regional fleets deliver more cycles per day, which makes climb and approach learning disproportionately valuable. Business aviation testbeds trial connected flight decks and perception aids that can later scale to transport‑category aircraft. None of these segments dictate what happens on an A320 or 737, but they send strong signals about what will work, what crews will accept, and what regulators will allow for AI in commercial aviation.
The human factor: explainable aids, not inscrutable oracles
Aviation’s best safety feature is the professional judgement of crews, dispatchers, engineers, and controllers. AI earns a role only when it augments that judgement. That is why explainability and interaction design matter so much. If a route advisor suggests deviating twenty miles right for a net saving of four minutes and 300 kilograms of fuel, it must show why: expected turbulence decay, wind shift, traffic gap. If a taxi aide warns of a runway incursion risk, it must present clear cues and unambiguous guidance, not a guessing game.
Software should also know when to be quiet. Flooding professionals with alerts or constantly interrogating them for confirmations can erode trust. The best systems behave like good colleagues: speak up when it helps; stay out of the way when it doesn’t.
Myths and realities: clearing the fog around “AI will fly the plane”
It won’t—not as a general claim, not soon, and not without a pilot. What you will see is assistive autonomy in narrow roles under runtime assurance, combined with human‑centred interfaces and strict certification. That is not a disappointment; it is aviation doing what it does best: moving fast where it is safe to do so and moving deliberately where it must. The industry’s safety record demands nothing less.
Another myth: “AI will replace maintenance teams.” It won’t. It will replace hours of reading with minutes of deciding. Engineers will still decide whether a trend is actionable, which fix to apply, and how to plan the next check.
A third myth: “AI is a black box.” It must not be. As standards mature, the documentation and monitoring that accompany learning systems will be part of the safety case. If a model cannot be explained, verified, and monitored, it should not be on the aircraft or in the operation.
AI in commercial aviation – A quick field guide: how to tell real aviation AI from marketing
Look for these signals:
- Specific, bounded problem. “Climb settings on tail XYZ in 30°C at 1,500 ft elevation” beats “AI optimises flights.”
- Tail‑number granularity. Per‑aircraft models signal real learning rather than generic optimisation.
- Evidence of learning. Mentions of post‑flight updates, training data, or confidence measures suggest substance.
- Assurance language. References to runtime assurance, learning assurance, or ED‑324/EASA/FAA roadmaps indicate awareness of certification realities.
- Human‑in‑the‑loop. Tools that feed crew, dispatch, or engineering decisions—not replace them—fit the near‑term path.
Be wary of these:
- Vague “AI‑powered” claims without a problem definition.
- No mention of data rights or monitoring in service.
- Autonomy claims without a path to certification.
In aviation, the best marketing line is still a dated, attributable result.
The business case: small percentages, enormous totals
AI in airlines is a game of one‑percenters. A few percent on climb, a few minutes on route, a few days earlier on maintenance scheduling—each looks modest in isolation. Together, across thousands of daily flights and hundreds of aircraft, they save fuel, time, parts, and goodwill. They also produce resilience: the ability to keep the schedule when storms pop up, to recover quickly when a hub is constrained, and to avoid compounding delays when a repeat defect tries to hide.
The tools mentioned here—AI route advisors, tail‑specific climb learning, NLP logbook mining, predictive twins, vision‑based cockpit assistance—form a stack. The more complete the stack, the greater the compounding effect. That is why the foundations matter so much: data rights, cybersecurity, and skills decide how fast the stack can grow.
AI in commercial aviation – What to watch next: concrete milestones that will show progress
- More public case studies like Alaska’s, with absolute numbers attached to fuel and time savings.
- Airline announcements adopting tail‑specific climb learning across entire fleets, not just subfleets.
- MRO platforms adding additional NLP modules—for example, parts request clustering or deferred defect analysis.
- Airframers publishing assistive vision results in more airports and weather regimes, with human‑factors data.
- EASA/FAA/ED‑324 updates that convert consultation into accepted means of compliance and test practices.
- IATA AOD principles showing up in airline‑vendor contracts, cutting deployment times for new AI modules.
These milestones will separate momentum from noise.
For more context on autonomy and ATC modernisation, see our ATC overhaul explainer.
Conclusion: the case for cautious ambition
Overall, AI in commercial aviation is earning its place flight by flight. Route advisors cut fuel and time; tail‑specific climb learning personalises performance; NLP finds repeat defects earlier; predictive twins keep engines on wing longer; vision aids promise clearer cues in the most complex phases. Regulators are not on the sidelines. They are writing the roadmaps, standards, and safety patterns that let learning systems prove themselves in service—bounded, monitored, and explainable.
The critical opinion here is simple. AI in commercial aviation should pursue cautious ambition: move fast where value is clear and risk is low; move deliberately where stakes are high; insist on runtime assurance and learning assurance as the price of admission; and invest as much in people and process as in code. The industry’s safety culture is its advantage, not a brake. AI that respects that culture will make flying quieter, steadier, and more reliable for passengers and more satisfying for professionals.
The closing question is for leaders, regulators, and engineers alike: if AI in commercial aviation already saves minutes and tonnes every day, what specific evidence—and what in‑service monitoring—will you require before letting the next AI‑assisted decision become part of your standard operating procedure?
Update / Addendum — November 10, 2025
On November 10, 2025, EASA opened its first regulatory proposal on AI in aviation for public consultation: NPA 2025‑07 “Artificial intelligence trustworthiness.” The package sets out detailed specifications plus AMC/GM (“DS.AI”) that operationalise the EU AI Act’s high‑risk system requirements for aviation. Consultation runs for three months, with comments due by February 10, 2026 via EASA’s Comment Response Tool. This is the first step of Rulemaking Task RMT.0742; a second NPA in 2026 will propagate the framework into domain regulations. The proposal prioritises Level 1 (assistance to human) and Level 2 (human‑AI teaming) applications, initially covering data‑driven AI (supervised/unsupervised) and signalling later extensions to reinforcement learning, knowledge‑based, hybrid, and generative AI. (EASA)
Why this matters for what you just read
- It makes concrete the “assurance by design” theme in the article. DS.AI translates “trustworthiness” into certifiable artefacts and guidance, giving operators and OEMs a predictable path to approval rather than one‑off negotiations. (EASA)
- It confirms the “assistance‑first” glidepath I described: the near‑term focus is Level 1/Level 2 roles (dispatch advisors, climb optimisation, maintenance NLP, computer‑vision aids), with more adaptive AI addressed in later steps. (EASA)
- It aligns aviation with the EU AI Act by mapping aviation uses to Chapter III, Section 2 obligations and the aviation domains listed in Article 108—exactly the “common rulebook” pressure the article anticipated. (EASA)
What airlines (AOC holders), aircraft manufacturers (OEMs), maintenance organisations (MROs), training providers, and regulators should do now
- Classify current and planned AI functions against Level 1/Level 2 and gap‑assess them against DS.AI topics (assurance evidence, human factors, ethics, data governance). Submit feedback within the consultation window via the CRT. (EASA)
- Treat DS.AI as the baseline for safety cases and procurement: require vendors to document model purpose, training‑data provenance, known limitations, and in‑service monitoring—the same documentation discipline emphasised in this article. Comment deadline: February 10, 2026. (EASA)
- Plan for NPA #2 in 2026 to embed DS.AI across airworthiness/ops rules, and for later extensions that may cover reinforcement learning and generative AI—consistent with the article’s “bounded autonomy inside a safety cage” outlook. (EASA)
Sources: EASA news announcement (Nov 10, 2025) and the NPA 2025‑07 page with comment deadline and DS.AI materials. (EASA)
Leave your comments below or on our Fliegerfaust Facebook page.
Sources
Alaska Airlines — How AI is helping Alaska Airlines plan better flight routes and lower emissions (August 9, 2024).
Air India — Air India Group deploys SITA OptiFlight and SITA eWAS (September 12, 2025).
Airbus — Airbus tests new technologies to enhance pilot assistance (DragonFly) (January 12, 2023).
Accenture × Airbus — How Airbus builds faster and smarter with AI‑assisted visual intelligence (accessed October 2025).
Acubed (Airbus) — Introducing AVIA: Advanced Visual Intelligence for Assembly (accessed October 2025).
Airspace Intelligence — Air traffic management – Flyways overview (accessed October 2025).
Daedalean — Landing and traffic detection with computer vision (accessed October 2025).
EASA — Artificial Intelligence Roadmap 2.0 (May 10, 2023).
EU — Artificial Intelligence Act (Regulation (EU) 2024/1689) (June 13, 2024).
EUROCAE — Open consultation for ED‑324 (AI process standard) (August 1, 2025).
GE Aerospace — FlightPulse app soars to 60,000 pilot users (October 9, 2025).
Lufthansa Technik — AVIATAR adds Technical Repetitives Examination (February 25, 2025).
Aviation Week — Lufthansa Technik debuts AVIATAR’s first AI tool (March 4, 2025).
NASA — Runtime Assurance guidance for complex systems (2023).
NVIDIA — Industrial Facility Digital Twins (accessed October 2025).
Pratt & Whitney — EngineWise Intelligence (accessed October 2025).
Rolls‑Royce — IntelligentEngine explainer (accessed October 2025).
SITA — OptiClimb (machine‑learning‑fed climb optimisation) (accessed October 2025).
Xwing — FAA’s first “standard UAS certification project” (October 19, 2023).
For full details, please refer to our Disclaimer page.



One of the earliest manifestations of AI in the aviation world was the A320’s entry into service in 1988. The term AI may not have been in common use at the time, but fly‑by‑wire was already a relatively advanced form of artificial intelligence.