Don Bateman

At the start of the jet age, changes in aircraft design and improved maintenance procedures brought a significant improvement in aviation safety. One set of accidents remained stubbornly difficult to reduce: the tragic case where a perfectly airworthy aircraft is flown into the ground or sea. Clearly the crew, in such cases, had no intention to crash, but nevertheless the crash happens. Loss of situational awareness, fixation on other problems or a lack of adherence to standard operating procedures can all contribute to these accidents. All too often they are fatal.

One strategy for reducing accidents, where there is a significant human factor, is the implementation of suitable alerting and warning systems in the cockpit. It could be said that such aircraft systems support the vigilance of the crew and thus help reduce human error.

For decades the number one fatal accident category was Controlled Flight Into Terrain (CFIT). It always came top of global accident analysis reports. Pick up a book on the world’s major civil aircraft crashes since the 1960s and there will be a list of CFIT accidents. By the way, CFIT is an internationally agreed category for classifying accidents[1]. Twenty years ago, I was part of a team that managed these classifications.

When I started work on aircraft certification, in the early 1990s, the Ground Proximity Warning System (GPWS) already existed. A huge amount of work had been done since the 1970s defining and refining a set of protection envelopes that underpinned cockpit warnings aimed at avoiding CFIT.
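
To illustrate the idea of a protection envelope, here’s a minimal sketch of an “excessive descent rate” style check, in the spirit of the classic GPWS modes. The function and its thresholds are invented purely for illustration; they are not the certified alerting logic.

```python
# Illustrative sketch of a GPWS-style "excessive descent rate" envelope.
# Real equipment uses certified, carefully refined curves; the numbers
# below are invented purely to show the envelope concept.

def sink_rate_alert(radio_altitude_ft: float, descent_rate_fpm: float) -> str:
    """Return an alert level for a given height and descent rate."""
    if radio_altitude_ft <= 0 or descent_rate_fpm <= 0:
        return "NONE"
    # Hypothetical boundaries: the permissible descent rate grows with height.
    caution_limit = 1500 + 2.0 * radio_altitude_ft   # "SINK RATE" boundary
    warning_limit = 2000 + 2.5 * radio_altitude_ft   # "PULL UP" boundary
    if descent_rate_fpm > warning_limit:
        return "PULL UP"
    if descent_rate_fpm > caution_limit:
        return "SINK RATE"
    return "NONE"

print(sink_rate_alert(500, 1000))   # gentle descent
print(sink_rate_alert(500, 2700))   # inside the caution region
print(sink_rate_alert(500, 4000))   # inside the warning region
```

The real engineering work, of course, was in shaping those boundaries from decades of accident and flight data so that genuine threats triggered alerts while normal operations did not.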

UK CAA Specification 14 on GPWS dates from 1976[2]. This safety equipment had been mandated in many countries for certain types of public transport aircraft operation. It was by no means fitted to all aircraft and all types of aircraft operation. This was highlighted when an Air Inter Airbus A320 crashed near Strasbourg, France, in January 1992[3].

No alerting or warning system is perfect. GPWS had been successful in reducing the number of CFIT accidents but there were still occurrences where the equipment proved ineffective or was ignored.

I first met Don Bateman[4] on one of his whistle-stop tours presenting detailed analyses of CFIT accidents and the latest versions of the GPWS. At that time, he was working for the company Sundstrand[5], based in Redmond in Washington State, US. It was a time when Enhanced GPWS (EGPWS)[6] was being promoted. This version of the equipment added a capability to address approaches to runways where the classic GPWS was known to give false results. False alerts and warnings are the enemy of any aircraft system since they reduce a crew’s confidence in its workings.

My role was the UK approval of the systems and equipment. Over a decade the industry moved from a basic GPWS to EGPWS to what we have now, Terrain Avoidance and Warning Systems (TAWS).

When I think of Don Bateman’s contribution[7], few people have advanced global aviation safety as much as he did. His dedication to driving forward GPWS ensured the technology became almost universal. Consequently, a large number of lives must have been saved through the CFIT accidents that did not happen.

He left no doubt as to his passion for aviation safety, was outstandingly professional and a pleasure to work with on every occasion. This work was an example of a positive and constructive partnership between aviation authorities and industry. We need more of that approach.

POST 1: Don Bateman Saved More Lives Than Anyone in Aviation History | Aviation Pros

POST 2: Don Bateman, ‘Father’ Of Terrain Awareness Warning Systems, Dies At 91 | Aviation Week Network


[1] https://www.intlaviationstandards.org/Documents/CICTTStandardBriefing.pdf

[2] https://publicapps.caa.co.uk/docs/33/CASPEC14.PDF

[3] https://reports.aviation-safety.net/1992/19920120-0_A320_F-GGED.pdf

[4] https://www.invent.org/inductees/c-donald-bateman

[5] https://archive.seattletimes.com/archive/?date=19930125&slug=1681820

[6] https://aerospace.honeywell.com/us/en/pages/enhanced-ground-proximity-warning-system

[7] https://aviationweek.com/air-transport/safety-ops-regulation/don-bateman-father-terrain-awareness-warning-systems-dies-91

Experts

The rate of increase in the power of artificial intelligence (AI) is matched by the rate of increase in the number of “experts” in the field. I’ve heard that said jokingly. Five minutes on Twitter and it’s immediately apparent that off-the-shelf opinions run from “what’s all the fuss about?” to “Armageddon is just around the corner”.

Being a bit of a stoic[1], I take the view that opinions are fine, but the question is: what’s the reality? That doesn’t mean ignoring honest speculation, but that speculation should have some foundation in what’s known to be true. There are plenty of emotive opinions that are wonderfully imaginative. The problem is that they don’t help us take the best steps forward when faced with monumental changes.

Today’s report is of Dr Geoffrey Hinton’s retirement from Google. Now, there’s a body of experience in working with AI. He warns that the technology is heading towards a state where it’s far more “intelligent” than humans. He has raised the issue of “bad actors” using AI to the detriment of us all. These seem to me valid concerns from an experienced practitioner.

For decades, the prospect of a hive mind has peppered science fiction stories with tales of catastrophe. With good reason, given that mind-to-mind interconnection is something humans haven’t mastered. This is likely to be where both the highest risk and the greatest potential benefit lie. If machine learning can gain knowledge at phenomenal speed from a vast diversity of sources, it becomes difficult to challenge. It’s not that AI will exhibit wisdom. It’s that its acquired information will give it the capability to develop, promote and sustain almost any opinion.

Let’s say the “bad actor” is a colourful politician of limited competence with a massive ego and ambition beyond reason. Sit alongside that character an AI that can conjure up brilliant speeches and strategies for beating opponents, and they become dangerous.

So, to talk about AI as the most important inflection point in generations is not hype. In that respect the rapid progress of AI is like the invention of dynamite[2]. It changed the world in both positive and negative ways. Around the world, countries have explosives laws and require licences to manufacture, distribute, store, use and possess explosives or their ingredients.

So far, mention of the regulation of AI makes people in power shudder. Some lawmakers are bigging-up a “light-touch” approach. Others are hunched over a table trying to put together threads of a regulatory regime[3] that will accentuate the positive and eliminate the negative[4].


[1] https://dailystoic.com/what-is-stoicism-a-definition-3-stoic-exercises-to-get-you-started/

[2] https://en.wikipedia.org/wiki/Dynamite

[3] https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

[4] https://youtu.be/JS_QoRdRD7k

AI awakens

Artificial Intelligence (AI)[1] is with us. Give it a question and it will answer you. Do it many times, with access to many information sources, and it will improve its answers. That seems like a computer that can act like a human. In everyday reality, AI mimics a small number of the tasks that “intelligent” humans can do, and do with little effort.

AI has a future. It could be immensely useful to humanity. As with other revolutions, it could take the drudgery out of administrative tasks, simple research, and well characterised human activities. One reaction to this is to joke that – I like the drudgery. Certainly, there’s work that could be classified as better done by machine but there’s pleasure to be had in doing that work.

AI will transform many industries, but will it ever wake up[2]? Will it ever become conscious?

A machine acting human is not the same as it becoming conscious. AI mimicking humans can give the appearance of being self-aware, but it’s not. Digging deep inside the mechanism, it remains a computational machine that knows nothing of its own existence.

We don’t know what it is that can give rise to consciousness. It’s a mystery how it happens within our own brains. It’s not a simple matter. It’s not magic either but it is a product of millions of years of evolution.

Humans learn from our senses. A vast quantity of experiences over millennia have shaped us. Not by our own choosing but by chance and circumstances. Fortunately, a degree of planetary stability has aided this growth from simple life to the complex creatures we are now.

One proposition is that complexity and consciousness are linked. That is, consciousness in a machine may arise from billions and billions of connections and experiences. It’s an emergent behaviour that arises at some unknown threshold. As such, this proposition leaves us with a major dilemma. What if we inadvertently create conscious AI? What do we do at that moment?

Will it be an accidental event? There are far more questions than answers. No wonder there’s a call for more research[3].


[1] https://www.bbc.co.uk/newsround/49274918

[2] https://www2.deloitte.com/us/en/pages/consulting/articles/the-future-of-ai.html

[3] https://www.bbc.co.uk/news/technology-65401783.amp

Working hard for the money

What goes wrong with research spending? It’s a good question to ask. In some ways research spending is like advertising spending. “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.[1]” Globally billions are spent on advertising so you might say – it must be working. In fact, far more is spent on advertising than is ever available for research in the aviation and aerospace world.

Research spending is a precious asset because of its bounds. Even so, a great deal of it is lost on activities that deliver little or no benefit. It’s true that governments, institutions and industry don’t often put up funds for vague and imprecise aspirations or outlandish predictions, but nevertheless money goes down a sinkhole on far too many occasions.

A reluctance to take tough decisions, or at the other extreme of the spectrum a relish in disruption, plagues research funding decision making. Bad projects can live long lives and good projects get shut down before their time. My observation is that the following cases crop up all too often across the world.

Continuing to service infrastructure that cost a great deal to set up. It’s the classic sunk-cost problem: having spent large sums of money on something, the desperation to see a benefit encourages more spending. Nobody likes to admit defeat, or that their original predictions were way off the mark.

Virtuous circles are difficult to challenge. For example, everyone wants to see a more efficient and sustainable use of valuable airspace, so critics of spending towards that objective are not heard, even when substantial spending is misdirected or hopelessly optimistic.

Glamorous and sexy subjects, often in the public limelight, get a leg-up when it comes to the evaluation of potential research projects. Politicians love press photographs that associate them with something that looks, in the public mind, like a solution. Academics are no different in that respect.

Behold the gurus! There are conferences and symposiums where ideas are hammered home by persuasive speakers and charismatic thinkers. Amongst these forums there are innovative ideas, but also those that get more consideration than they warrant.

Narrow, focused recommendations can distort funding decisions. With the best of intent, an investigation or study group might highlight a deficiency that needs work, but one that sits in a distinct niche of interest. It can be a push in the opposite direction to a Pareto analysis[2].
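
For contrast, a Pareto analysis points effort at the few categories that dominate the total. A minimal sketch, with the category names and occurrence counts invented purely for illustration:

```python
# Minimal Pareto analysis sketch: which few categories dominate the total?
# Category names and counts are invented for illustration.

occurrences = {
    "runway excursion": 120,
    "loss of control": 90,
    "CFIT": 60,
    "system failure": 15,
    "ground handling": 10,
    "other": 5,
}

total = sum(occurrences.values())
cumulative = 0.0
vital_few = []
# Walk categories from largest to smallest until 80% of the total is covered.
for category, count in sorted(occurrences.items(), key=lambda kv: -kv[1]):
    cumulative += count / total
    vital_few.append(category)
    if cumulative >= 0.8:   # the classic 80% cut-off
        break

print(vital_few)  # the "vital few" categories worth most attention
```

A niche recommendation, however well intentioned, can pull funding towards the long tail of that list rather than the vital few at its head.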

Highlighting these points is easier than fixing the underlying problems. It’s a good start to be aware of them before pen meets paper and a contract is signed.


[1] statement on advertising, credited to both John Wanamaker (1838-1922) and Lord Leverhulme (1851-1925).

[2] https://asq.org/quality-resources/pareto

Who’s in control?

The subject of artificial intelligence (AI) in an aircraft cockpit stirs-up reactions that are both passionate and pragmatic. Maybe, it’s a Marmite issue[1]. Mention of the subject triggers an instant judgement. 

Large passenger transport civil aircraft are flown by two human operators. Decisions are made by those two human operators. They are trained and acquire experience doing the job of flying. A word that has its origins in the marine world is used to describe their role – pilot.

One of my roles, early in my career, was to lead the integration of a cockpit display system into a large new helicopter[2]. New, at the time. The design team I was part of comprised people with two different professional backgrounds: some had an engineering background, like me, and others had qualifications in psychology. The recognition that an aircraft cockpit is where the human and machine meet is not new. A lot of work was done in simulation with flight crews.

The first generation of jet aircraft put the pilot in full-time command. As we moved away from purely mechanical interactions with aircraft, the balance of flight control came to be shared between pilot and aircraft systems. There’s no doubt, in the numbers, that this has improved aviation safety.

Nobody is calling for the removal of aircraft autopilot systems. Much of the role of the formerly required flight engineer has been integrated into the aircraft systems. Information is compressed and summarised on flat screen displays in the aircraft cockpit.

Today, AI is not just one thing. There’s a myriad of different types and configurations, some of which are frozen and some of which are constantly changing as they learn and grow. That said, a flawless machine is a myth. Now, that’s a brave statement. We are generations away from a world where sentient machines produce ever better machines. It’s the stuff of sci-fi.

As we try to make ever more capable machines, failures are a normal part of evolution. Those cycles of attempt and failure will need to run into the billions before human capabilities are fully matched. Yes, I know that’s an assertion, but it has taken humans more than a million years to get to have this discussion. That’s with our incredible brains.

What AI can do well is enhance human capabilities[3]. Let’s say that, of all the billions of combinations and permutations an aircraft in flight can experience, a failure occurs that is not expected, not trained for, and not easily understood. This is where the benefits and speed of AI can add a lot. Aircraft systems using AI should be able to consider a massive number of potential scenarios and provide a selection of viable options to the flight crew. In time-critical events, AI can help.

The road where AI replaces a pilot in the cockpit is a dead end. The road where AI helps a pilot in managing a flight is well worth pursuing. Don’t set the goal at replacing humans. Set the goal at maximising the unique qualities of human capabilities.


[1] https://www.macmillandictionary.com/dictionary/british/marmite_2

[2] https://en.wikipedia.org/wiki/AgustaWestland_AW101

[3] https://hbr.org/2021/03/ai-should-augment-human-intelligence-not-replace-it

First Encounter

My first encounter with what could be classed as early Artificial Intelligence (AI) was a Dutch research project. It was around 2007. Let’s first note, a mathematical model isn’t pure AI, but it’s an example of a system that is trained on data.

It almost goes without saying that learning from accidents and incidents is a core part of the process to improve aviation safety. A key industry and regulatory goal is to understand what happened when things go wrong and to prevent a repetition of events.

Civil aviation is an extremely safe mode of transport. That said, because of the size of the global industry there are enough accidents and incidents worldwide to provide useful data on the historic safety record. Despite significant pre-COVID pandemic growth of civil aviation, the number of accidents is so low that further reduction is proving hard to win.

What if a system was developed that could look at all the historic aviation safety data and make a prediction as to what accidents might happen next?

The first challenge is the word “all”, in that compiling such a comprehensive record of global aviation safety is a demanding task. It’s true that comprehensive databases do exist, but even within these extremely valuable records there are errors, omissions, and summary information.

There’s also the pushback that is often associated with record keeping. A system that demands detailed records of even the most minor incident can be burdensome. Yes, such record keeping has admirable objectives, but the “red tape” wrapped around those objectives can have negative effects.

Looking at past events has only one aim: to do things now that prevent aviation accidents in the future. Once a significantly comprehensive database exists, analysis can provide simple indicators that offer clues as to what might happen next. Even basic mathematics can give us a trend line drawn through a set of key data points[1]. It’s effective but crude.
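
That crude trend line can be as simple as a least-squares fit through yearly totals. The yearly counts below are invented purely to illustrate the technique:

```python
# Least-squares trend line through yearly accident counts.
# The counts here are invented purely for illustration.

years = [2010, 2011, 2012, 2013, 2014, 2015]
accidents = [28, 26, 27, 23, 22, 20]

n = len(years)
mean_x = sum(years) / n
mean_y = sum(accidents) / n
# Classic least-squares slope and intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, accidents)) \
        / sum((x - mean_x) ** 2 for x in years)
intercept = mean_y - slope * mean_x

# Extrapolate one year ahead -- effective, but crude.
estimate_2016 = slope * 2016 + intercept
print(f"trend: {slope:.2f} per year, 2016 estimate: {estimate_2016:.1f}")
```

Useful as far as it goes, but a straight line through six points says nothing about why the numbers moved, which is exactly the gap a causal model tries to fill.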

What if a prediction could take on board all the global aviation safety data available, together with knowledge of how civil aviation works, and mix it in such a way as to provide reliable predictions? This is prognostics. It’s a bit like the oracle at Delphi[2]. The aviation “oracle” could be consulted about the state of affairs in respect of aviation safety. A dream? Maybe not.

The acronym CAT normally refers to large commercial air transport (CAT) aeroplanes. This article, however, is about the Causal model for Air Transport Safety (CATS)[3]. This research project could be called an early use of “Big Data” in aviation safety work. However, as I understand it, the original aim was to make prognostics a reality.

Using Bayesian network-based causal models, it was theorised that a map of aviation safety could be produced. It could then be possible to predict the direction of travel for the future.
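
The flavour of such a model can be sketched with a toy calculation: two binary contributing factors feeding one outcome. All the probabilities, and the factor names, are invented for illustration; the real CATS model was far richer, with many nodes quantified from data and expert judgement.

```python
# Toy Bayesian-network-style calculation: two binary "causes" feed one outcome.
# All probabilities and node names are invented for illustration only.

p_fatigue = 0.10          # P(crew fatigued)
p_bad_wx = 0.20           # P(bad weather)

# Conditional probability table: P(unstable approach | fatigue, weather)
cpt = {
    (True, True): 0.30,
    (True, False): 0.10,
    (False, True): 0.08,
    (False, False): 0.02,
}

# Marginalise over the causes to get the overall outcome probability.
p_unstable = sum(
    cpt[(f, w)]
    * (p_fatigue if f else 1 - p_fatigue)
    * (p_bad_wx if w else 1 - p_bad_wx)
    for f in (True, False)
    for w in (True, False)
)
print(f"P(unstable approach) = {p_unstable:.4f}")
```

The appeal of the causal structure is that it can be interrogated: change one input probability, say after a new fatigue rule, and the effect ripples through to the outcome.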

This type of quantification has a lot of merit. It has weaknesses, in that the human factor (HF) often defies prediction. However, as AI advances, maybe causal modelling ought to be revisited. New off-the-shelf tools could be used to look again at the craft of prediction.


[1] https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Air_safety_statistics_in_the_EU

[2] https://www.history.com/topics/ancient-greece/delphi

[3] https://open.overheid.nl/documenten/ronl-archief-d5cd2dc7-c53f-4105-83c8-c1785dcb98c0/pdf

Policy & AI

Today, the UK Government published its approach to Artificial Intelligence (AI)[1], in the form of a white paper. That’s a policy document created by the Government that sets out its proposals for future legislation.

This is a big step. Artificial Intelligence (AI) attracts both optimism and pessimism. Utopia and dystopia. There are a lot more people who sit in these opposing camps than there are who sit in the middle. It’s big. Unlike any technology that has been introduced to the whole populace.

On Friday last, I caught the film I, Robot (2004)[2] showing early evening on Film 4. It’s difficult to believe this science fiction is nearly 20 years old, and that the Isaac Asimov short stories on which it’s based date from the 1950s. AI is a fertile space for the imagination to range over.

Fictional speculation about AI has veered towards the dystopian end of the scale, although that’s not the whole story by far. One example of good AI is the sentient android in the Star Trek universe. The android “Data”, serving aboard the USS Enterprise, strives to help humanity and to be more like us. His attempts to understand human emotions are often significant plot points. He’s a useful counterpoint to evil alien intelligent machines that predictably aim to destroy us all.

Where fiction helps is to give an airing to lots of potential scenarios for the future. That’s not trivial. Policy on this rapidly advancing subject should not be narrowly based or dogmatic.

Where there isn’t a great debate is over the high-level objectives that society should endeavour to achieve. We want technology to do no harm. We want technology to be trustworthy. We want technology to be understandable.

Yet, we know from experience, that meeting these objectives is much harder than asserting them. Politicians love to assert. In the practical world, it’s public regulators who will have to wrestle with the ambitions of industry, unforeseen outcomes, and negative public reactions.

Using the words “world leading” repeatedly is no substitute for resourcing regulators to beef up their capabilities when faced with rapid change. Vague and superficial speeches are fine in context. After all, there’s a job to be done maintaining public confidence in this revolutionary technology.

What’s evident is that we should not delude ourselves. This technical transformation is unlike any we have so far encountered. Its radical nature and speed mean that even when Government and industry work together, they are still going to be behind the curve.

As a fictional speculation an intelligent android who serves as a senior officer aboard a star ship is old school. Now, I wonder what we would make of an intelligent android standing for election and becoming a Member of Parliament?


[1] The UK’s AI Regulation white paper was published on Wednesday, 29 March 2023. Organisations and individuals involved in the AI sector were encouraged to provide feedback on the white paper through a consultation launched that day and running until Tuesday, 21 June 2023.

[2] https://en.wikipedia.org/wiki/I,_Robot_(film)

Radio on the hill

We take radio for granted. I’m listening to it, now. That magic of information transferred through the “ether[1]” at the speed of light and without wires. This mystery was first unravelled in the 19th century. Experimentation and mathematics provided insights into electromagnetics.

The practical applications of radio waves were soon recognised. The possibility of fast information transfer between A and B had implications for communications and for the battlefield.

It’s unfortunate to say that warfare often causes science to advance rapidly. The urgency to understand more is driven by strong needs. That phrase “needs must” comes to mind. We experienced this during the COVID pandemic. Science accelerated to meet the challenge.

It wasn’t until after he failed as an artist that Samuel Morse transformed communications by inventing the telegraph, with its dots and dashes. There’s a telegraph gallery with reproductions of Morse’s early equipment at the Locust Grove Estate[2] in Poughkeepsie. I’d recommend it.

The electromagnetic telegraph used wires to connect A and B. Clearly, that’s not useful if the aim is to connect an aircraft with the ground.

The imperative to make air-ground communication possible came from the first world war. Aviation’s role in warfare came to the fore. Not just in surveillance of the enemy but offensive actions too. Experimentation with airborne radio involved heavy batteries and early spark transmitters. Making such crude equipment usable was an immense challenge. 

Why am I writing about this subject? This week, on a whim, I visited the museum at Biggin Hill. The Biggin Hill Museum[3] tells the story of the pivotal role played by the fighter station in the second world war. The lesser-known story is the origins of the station.

It’s one of Britain’s oldest aerodromes and sits high up on the hills south of London. Biggin Hill is one of the highest points in that area, rising to over 210 metres (690 ft) above sea level. 

Its transformation from agricultural fields to a research station (south camp) took place in 1916 and 1917. Its purpose was to explore the scientific and technical innovations of that time, wireless in particular. 141 Squadron of the Royal Flying Corps (RFC) was based at Biggin Hill and equipped with Bristol Fighters. The RFC was the first to make use of wireless telegraphy to assist with artillery targeting.

These were the years before the Royal Air Force (RAF) was formed.

100 years later, in early 2019, the Biggin Hill Museum opened its doors to the public. It’s a small museum but well worth a visit. I found the stories of the early development of airborne radio communications fascinating. So much we take for granted had to be invented, tested, and developed from the most elemental components.

POST 1: Now, I wish I’d been able to attend this lecture – Isle of Wight Branch: The Development of Airborne Wireless for the R.F.C. (aerosociety.com)

POST 2: The bigger story marconiheritage.org/ww1-air.html


[1] https://www.britannica.com/science/ether-theoretical-substance

[2] https://www.lgny.org/home

[3] https://bigginhillmuseum.com/

Digital toxicity

There’s a tendency to downplay the negative aspects of the digital transition that’s happening at pace. Perhaps it’s acceptance of the inevitability of change, with only hushed voices of objection.

A couple of simple changes struck me this week. One was my bank automatically moving me to an online statement, and the other a news story about local authorities removing pay machines from car parks on the assumption that everyone has a mobile phone.

With these changes there’s a high likelihood that difficulties are going to be caused for a few people. Clearly, the calculation of the banks and local authorities is that the majority rules. Exclusion isn’t their greatest concern but saving money is high on their list of priorities.

The above aside, my intention was to write about the more general toxic impacts of the fast-moving digital transition. Now, please don’t get me wrong. In most situations such a transition has widespread benefits. What’s of concern is how few mitigations there are for any downsides.

Let’s list a few negatives that may need more attention.

Addiction. With social media this is unquestionable[1]. After all, digital algorithms are developed to get people engaged and keep them engaged for as long as possible. It’s the business model that brings in advertising revenues. There’s FOMO too: the fear of missing out on something new or novel that others have seen.

Attention. Rapidly swiping a touch screen to move from image to image, or video to video, encourages less attention to be given to any one piece of information. What research there is shows a general decline in attention span[2] as a result of being subject to increasing amounts of easily available information.

Adoration. Given that so many digital functions are provided with astonishing accuracy, availability, and speed, there’s a natural inclination to trust their output. When that trust is justified a high percentage of the time, the few occasions when information is in error can easily be ignored or missed. This can lead to people defending or supporting information that is wrong[3] or misleading.

It’s reasonable to say there are downsides to any use of technology. That said, it’s as well to try to mitigate those that are known and understood. The big problem is the cumulative effect of the downsides. This can increase the fragility and vulnerability of the systems we all depend upon.

If digital algorithms were medicines or drugs, there would be a whole array of tests conducted before their public release. Some would be strongly regulated. I’m not saying that’s the way to go but it’s a sobering thought.


[1] https://www.theguardian.com/global/2021/aug/22/how-digital-media-turned-us-all-into-dopamine-addicts-and-what-we-can-do-to-break-the-cycle

[2] https://www.kcl.ac.uk/news/are-attention-spans-really-collapsing-data-shows-uk-public-are-worried-but-also-see-benefits-from-technology

[3] https://www.bbc.co.uk/news/business-56718036

Comms

The long history of data communications between air and ground has had numerous stops and starts. It’s not new to use digital communications while flying around the globe. That said, it has not been cheap, and traditional systems have evolved only slowly. We might think Controller Pilot Data Link Communications (CPDLC)[1] is quite whizzy. It’s not. It belongs to the Windows 95 generation: clunky messages and limited applications.

The sluggish adoption of digital communications in commercial aviation has several causes. For one, standardised, certified, and maintainable systems and equipment have been expensive. It’s not just the purchase and installation but the connection charges that mount up.

Unsurprisingly, aircraft operators have moved cautiously unless they can identify an income stream to be developed from airborne communication. That’s one reason why passengers accessing the internet from their seats can have better connections than the two crew in the cockpit.

Larger nations’ military flyers don’t have a problem spending money on airborne networking. For them it’s an integral part of being able to operate effectively. In the civil world, each part of the aviation system must make an economic contribution or be essential to safety to make the cut.

The regulatory material applicable to Airborne Communications, Navigation and Surveillance (CS-ACNS)[2] can be found in publications coming from the aviation authorities. This material has the purpose of ensuring a high level of safety and aircraft interoperability. Much of this generally applicable material has evolved slowly over the last 30 years.

Now, it’s good to ask: is this collection of legacy aviation systems going to be changed by the new technologies rapidly coming on-stream this year? Or are the current mandatory equipage requirements likely to stay the same, but be greatly enhanced by cheaper, faster, lower-latency digital connections?

This year, Starlink[3] is offering high-speed, in-flight internet connections with global connectivity. This company is not the only one developing Low Earth Orbit (LEO)[4] satellite communications. There are technical questions to be asked in respect of safety, performance, and interoperability, but it’s a good bet that these new services will be very capable and, what’s more, not so expensive[5].

It’s time for airborne communications to step into the internet age.

NOTE: The author was a part of the EUROCAE/RTCA Special Committee 169 that created Minimum Operational Performance Standards for ATC Two-Way Data Link Communications back in the 1990s.

POST 1: Elon Musk’s Starlink Internet Service Coming to US Airlines; Free WiFi (businessinsider.com)

POST 2: With the mandate of VDLM2 we evolve at the pace of a snail. Internet Protocol (IP) Data Link may not be suitable for all uses but there’s a lot more that can be done.


[1] https://skybrary.aero/articles/controller-pilot-data-link-communications-cpdlc

[2] https://www.easa.europa.eu/en/document-library/easy-access-rules/easy-access-rules-airborne-communications-navigation-and

[3] https://www.starlink.com/

[4] https://www.esa.int/ESA_Multimedia/Images/2020/03/Low_Earth_orbit

[5] https://arstechnica.com/information-technology/2022/10/starlink-unveils-airplane-service-musk-says-its-like-using-internet-at-home/