AI awakens

Artificial Intelligence (AI)[1] is with us. Give it a question and it will answer you. Ask it many times, with access to many information sources, and it will improve its answer. That seems like a computer that can act like a human. In everyday reality, AI mimics a small number of the tasks that “intelligent” humans can do, and do with little effort.

AI has a future. It could be immensely useful to humanity. As with other revolutions, it could take the drudgery out of administrative tasks, simple research, and well-characterised human activities. One reaction to this is to joke: but I like the drudgery. Certainly, there’s work that could be classified as better done by machine, but there’s pleasure to be had in doing that work.

AI will transform many industries, but will it ever wake up[2]? Will it ever become conscious?

A machine acting human is not the same as it becoming conscious. AI mimicking humans can give the appearance of being self-aware, but it’s not. Digging deep inside the mechanism, it remains a computational machine that knows nothing of its own existence.

We don’t know what it is that can give rise to consciousness. It’s a mystery how it happens within our own brains. It’s not a simple matter. It’s not magic either but it is a product of millions of years of evolution.

Humans learn from our senses. A vast quantity of experiences over millennia have shaped us. Not by our own choosing but by chance and circumstances. Fortunately, a degree of planetary stability has aided this growth from simple life to the complex creatures we are now.

One proposition is that complexity and consciousness are linked. That is, consciousness in a machine may arise from billions and billions of connections and experiences. It’s an emergent behaviour that arises at some unknown threshold. As such, this proposition leaves us with a major dilemma. What if we inadvertently create conscious AI? What do we do at that moment?

Will it be an accidental event? There are far more questions than answers. No wonder there’s a call for more research[3].


[1] https://www.bbc.co.uk/newsround/49274918

[2] https://www2.deloitte.com/us/en/pages/consulting/articles/the-future-of-ai.html

[3] https://www.bbc.co.uk/news/technology-65401783.amp

Head in Sand

Well, it’s happened. A debate. Are we any wiser? Well, not much. So many good points are raised but so many good points are dismissed by current Government Ministers. So deep are they in a mess of their own making.

On Monday, 24 April at 16:30, a UK Parliamentary debate[1] took place on the impact of the UK’s exit from the European Union (EU). This was consideration of e-petition[2] 628-226 relating to the impact of the UK’s exit from the EU. On the day of this debate this petition had attracted over 178 000 signatures. Petition debates are “general” debates which allow UK Members of Parliament (MPs) from all political parties to discuss important issues raised by the public.

The petition reasons that the benefits that were promised if the UK exited the EU have not been delivered. Not at all. Although this might be self-evident, it nevertheless warranted a timely debate. Public support for Brexit is falling with every day that goes by.

The petitioners called upon the UK Government to hold a public inquiry to assess the impact that Brexit has had on this country and its people. Given that other less impactful events have been subject to a public inquiry it seems only right that Brexit be investigated.

The call for an independent public inquiry, free from ideology and the opinions of vested interests is only fair, right, and proper in an accountable democratic 21st Century country. Transparency is a mark of good governance.

Today, Brexit is damaging the UK’s economy, opportunities for young people, and the rights of individuals. It’s well past time that the people of the UK were told the full story. There needs to be a way out of this mess.

In the debate the point was made that the two biggest Westminster political parties continue to be committed to Brexit despite the harm that it’s doing to the UK. A long list of disbenefits was rattled off as speakers paced through the evidence. A long list that is growing.

The Government’s current approach is to ask UK Parliamentarians to stop talking about Brexit. It’s the ultimate ostrich with its head in the sand[3]. Brexit is a gigantic strategic mistake. Unfortunately, there remains a significant number of English politicians so entrenched in the mythology of Brexit that change is slow in coming. The public are way ahead of the politicians.

Stereotyping people as being in one camp or another, with the aim of continuing to divide the public, is the unscrupulous tool of those without a rational or coherent argument to make. It’s clear that progress will not be made until Ministers recognise that Brexit was a mistake. We may have to wait until after the next UK General Election before real change is possible. Let’s hope that day comes soon.

POST 1: UK Press reports on the debate: “MPs debate consequences of Brexit for first time” (The Independent); “MPs debate Brexit impact ‘for the first time since leaving the EU’” (The National); “Brexit: MPs call for public inquiry into impact of leaving EU” (BBC News).

POST 2: Brexit is a drag on the UK: “Sunak Grins And Bears It As Boss Hits Out At Brexit’s ‘Drag On Growth’” (HuffPost UK Politics, huffingtonpost.co.uk).


[1] https://youtu.be/iHzf1BQFXq8

[2] https://petition.parliament.uk/

[3] It’s a myth that ostriches bury their heads in the sand. Nevertheless, “Ostrich Syndrome” is a popular notion. It’s the avoidance coping that people use to manage uncomfortable feelings, or rather, not deal with them.

Light touch folly

Light touch regulation. Now, there’s a senseless folly. It’s a green light to bad actors wherever they operate. It’s like building a medieval castle’s walls half as thick as planned, to save money, in the belief that enemies are too stupid to work it out. Saying that the public good is far less important than the speed of development is unwise, to say the least.

The internet arrived in the UK in the late 1980s. Now, it seems strange to recount. Clunky Personal Computers (PCs) and basic e-mail were the height of sophistication as we moved from an office of typewriters and Tipp-Ex to the simple word processor[1]. Generations will marvel at the primitive nature of our former working lives. Getting scissors, cutting out paper text, and pasting it into a better place in a draft document. Tippexing out errors and scribbling notes in the spaces between sentences. Yet, that’s what we did when first certifying many of the commercial airliners in regular use across the globe (Boeing 777, Airbus A320). Desktop computers took centre stage early in the 1990s, but administrations were amid a transition. Clickable icons hit screens in 1990. Gradually and progressively, new ways of working evolved.

Microsoft Windows 95 and the internet were heralded as the dawn of a new age. Not much thought was given to PCs being used for criminal or malicious purposes. No more thought than was given to the use of a typewriter to commit crime. That doesn’t mean such considerations were ignored; it just means they were deemed of lower importance.

In 2023, every day there’s a new warning about scammers. Even fake warnings about scammers, coming from scammers, with the aim of scamming. Identifying who’s real and who’s fake is becoming ever more difficult. Being asked to update subscriptions that were never opened in the first place is a good indicator that there’s some dirty work afoot. Notices that accounts are about to be blocked, referring to accounts that don’t exist, are another.

In 30 years the internet has taken on the good and bad of the greater world. It hasn’t become a safer place. In fact, it’s become a bit like the Wild West[2].

Our digital space continues to evolve but has nowhere near reached its potential. It’s like those great western plains where wagons headed out looking for rich new lands. In the towns along the way the shop fronts are gleaming and inviting, but look around the back and there’s a desperate attempt to keep bad actors at bay.

Only a fraction of suspicious emails, texts, and messages get reported. People unconsciously pile up a digital legacy and rarely clean out the trash that accumulates. A rich messiness of personal information can lie hidden to the eye, just below the digital surface.

When politicians and technocrats talk of “light touch regulation” it’s as if none of this matters. In the race to be first in technology, public protection is given a light touch. This can’t be a good way to go.


[1] Still available – Tipp-Ex Rapid, Correction Fluid Bottle, High Quality Correction Fluid, Excellent Coverage, 20ml, Pack of 3, white.

[2] https://en.wikipedia.org/wiki/American_frontier

Who’s in control?

The subject of artificial intelligence (AI) in an aircraft cockpit stirs up reactions that are both passionate and pragmatic. Maybe it’s a Marmite issue[1]. Mention of the subject triggers an instant judgement.

Large passenger transport civil aircraft are flown by two human operators. Decisions are made by those two human operators. They are trained and acquire experience doing the job of flying. A word that has its origins in the marine world is used to describe their role – pilot.

One of my roles, early in my career, was to lead the integration of a cockpit display system into a large new helicopter[2]. New, at the time. The design team I was part of comprised people with two different professional backgrounds. One group had an engineering background, like me, and the other had qualifications in psychology. The recognition that an aircraft cockpit is where the human and machine meet is not new. A lot of work was done in simulation with flight crews.

The first generation of jet aircraft put the pilot in full-time command. As we moved away from purely mechanical interactions with aircraft, the balance of flight control came to be shared between pilot and aircraft systems. There’s no doubt, in the numbers, that this has improved aviation safety.

Nobody is calling for the removal of aircraft autopilot systems. Much of the role of the formerly required flight engineer has been integrated into the aircraft systems. Information is compressed and summarised on flat screen displays in the aircraft cockpit.

Today, AI is not just one thing. There’s a myriad of different types and configurations, some of which are frozen and some of which are constantly changing as they learn and grow. That said, a flawless machine is a myth. Now, that’s a brave statement. We are generations away from a world where sentient machines produce ever better machines. It’s the stuff of sci-fi.

As we have tried to make ever more capable machines, failures have been a normal part of evolution. Those cycles of attempt and failure will need to run into the billions and billions before human capabilities are fully matched. Yes, I know that’s an assertion, but it has taken humans more than a million years to get to the point of having this discussion. That’s with our incredible brains.

What AI can do well is enhance human capabilities[3]. Let’s say that, of all the billions of combinations and permutations an aircraft in flight can experience, a failure occurs that is not expected, not trained for, and not easily understood. This is where the benefits and speed of AI can add a lot. Aircraft systems using AI should be able to consider a massive number of potential scenarios and provide a selection of viable options to a flight crew. In time-critical events, AI can help.

The road where AI replaces a pilot in the cockpit is a dead end. The road where AI helps a pilot in managing a flight is well worth pursuing. Don’t set the goal at replacing humans. Set the goal at maximising the unique qualities of human capabilities.


[1] https://www.macmillandictionary.com/dictionary/british/marmite_2

[2] https://en.wikipedia.org/wiki/AgustaWestland_AW101

[3] https://hbr.org/2021/03/ai-should-augment-human-intelligence-not-replace-it

First Encounter

My first encounter with what could be classed as early Artificial Intelligence (AI) was a Dutch research project. It was around 2007. Let’s first note that a mathematical model isn’t pure AI, but it’s an example of a system that is trained on data.

It almost goes without saying that learning from accidents and incidents is a core part of the process to improve aviation safety. A key industry and regulatory goal is to understand what happened when things go wrong and to prevent a repetition of events.

Civil aviation is an extremely safe mode of transport. That said, because of the size of the global industry there are enough accidents and incidents worldwide to provide useful data on the historic safety record. Despite significant pre-COVID pandemic growth of civil aviation, the number of accidents is so low that further reduction in numbers is proving hard to win.

What if a system was developed that could look at all the historic aviation safety data and make a prediction as to what accidents might happen next?

The first challenge is the word “all” in that compiling such a comprehensive record of global aviation safety is a demanding task. It’s true that comprehensive databases do exist but even within these extremely valuable records there are errors, omissions, and summary information. 

There’s also the push-back that is often associated with record keeping. A system that demands detailed record keeping of even the most minor incident can be burdensome. Yes, such record keeping has admirable objectives, but the “red tape” wrapped around those objectives can have negative effects.

Looking at past events has only one aim: to do things now that prevent aviation accidents in the future. Once a significant, comprehensive database exists, analysis can provide simple indicators that give clues as to what might happen next. Even basic mathematics can give us a trend line drawn through a set of key data points[1]. It’s effective but crude.
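As a minimal sketch of that crude-but-effective approach, here is an ordinary least-squares trend line fitted through a handful of yearly accident counts. The figures are invented purely for illustration; they are not real accident statistics.

```python
# A sketch of the "trend line through key data points" idea: an ordinary
# least-squares fit of accident counts against year. The figures below
# are invented for illustration and are not real accident statistics.

def trend_line(xs, ys):
    """Return (slope, intercept) of the least-squares line y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

years = [2015, 2016, 2017, 2018, 2019]   # hypothetical reporting period
accidents = [14, 12, 13, 10, 9]          # hypothetical yearly counts

a, b = trend_line(years, accidents)
print(f"Slope: {a:.2f} accidents per year")   # -1.20: a downward trend
```

The slope is the whole story here: a negative value suggests improvement, a positive one deterioration. It says nothing about why, which is exactly the crudeness the text describes.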

What if a prediction could take on board all the global aviation safety data available, together with the knowledge of how civil aviation works, and mix it in such a way as to provide reliable predictions? This is prognostics. It’s a bit like the Delphi oracle[2]. The aviation “oracle” could be consulted about the state of affairs in respect of aviation safety. Dream? Maybe not.

The acronym CAT normally refers to large commercial air transport (CAT) aeroplanes. What this article is about is the Causal model for Air Transport Safety (CATS)[3]. This research project could be called an early use of “Big Data” in aviation safety work. However, as I understand it, the original aim was to make prognostics a reality.

Using Bayesian network-based causal models, it was theorised that a map of aviation safety could be produced. From that, it could be possible to predict the direction of travel for the future.
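The CATS model itself was far larger, but the basic mechanics underlying a Bayesian causal model can be sketched with a toy two-variable example. The variable names and all probabilities below are invented for illustration; they are not taken from the CATS project.

```python
# A toy Bayesian update in the spirit of a causal safety model.
# The variable names and all probabilities are invented for illustration.

p_fatigue = 0.10                # prior: crew fatigue present on a flight
p_event_given_fatigue = 0.020   # incident probability when fatigue present
p_event_given_rested = 0.002    # incident probability otherwise

# Law of total probability: overall chance of an incident on a flight
p_event = (p_event_given_fatigue * p_fatigue
           + p_event_given_rested * (1 - p_fatigue))

# Bayes' rule: how likely fatigue was a factor, given an incident occurred
p_fatigue_given_event = p_event_given_fatigue * p_fatigue / p_event

print(f"P(incident)           = {p_event:.4f}")
print(f"P(fatigue | incident) = {p_fatigue_given_event:.2f}")
```

A full causal model chains hundreds of such conditional probabilities together into a network, so that evidence entered at one node updates beliefs across the whole map.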

This type of quantification has a lot of merit. It has weaknesses, in that the Human Factor (HF) often defies prediction. However, as AI advances, maybe causal modelling ought to be revisited. New off-the-shelf tools could be used to look again at the craft of prediction.


[1] https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Air_safety_statistics_in_the_EU

[2] https://www.history.com/topics/ancient-greece/delphi

[3] https://open.overheid.nl/documenten/ronl-archief-d5cd2dc7-c53f-4105-83c8-c1785dcb98c0/pdf

Pause

An open letter has been published[1]. Not for the first time. It asks those working on Artificial Intelligence (AI) to take a deep breath and pause their work. It’s signed by AI experts and interested parties, like Elon Musk. This is a reaction to the competitive race to launch ever more powerful AI[2]. With each new technology launch, it’s taking fewer and fewer years to reach a billion users.

If the subject were genetic manipulation, the case for a cautious step-by-step approach would be easily understood. However, the digital world, and its impact on our society’s organisation, isn’t viewed as being as important as genetics. Genetically Modified (GM) crops got people excited and anxious. An artificially modified social and political landscape doesn’t seem to concern people quite so much. It may be that the basis for this ambivalence is a false view that we are more in control of one than the other. It’s more likely this ambivalence stems from a lack of knowledge.

One response to the open letter[3] I saw ran thus: “A lot of fearmongering Luddites here! People were making similar comments about the pocket calculator at one time!” This is to totally misunderstand what is going on with the rapid advance of AI. I think the impact on society of the proliferation of AI will be greater than that of the invention of the internet. It will change the way we work, rest and play. It will do so at remarkable speed. We face an unprecedented challenge.

I’m not for one moment advocating a regulatory regime driven by societal puritans. The open letter is not proposing a ban. What’s needed is a regulatory regime that can moderate aggressive advances so that knowledge can be acquired about the impacts of AI.

Yesterday, a government policy was launched in the UK. The problem with saying that there will be no new regulators, and that regulators will need to act within existing powers, is obvious. It’s a diversion of resources away from existing priorities to address challenging new priorities. That, in and of itself, is not an original regulatory dilemma. It could be said that’s why we have sewage pouring into rivers up and down the UK.

In an interview, Conservative Minister Paul Scully MP mentioned sandboxing as a means of complying with policy. This is to create a “safe space” to try out a new AI system before launching it on the world. It’s a method of testing and trials that is useful for gaining an understanding of conventional complex systems. The reason this is not easily workable for AI is that it’s not possible to build enough confidence that AI will be safe, secure, and perform its intended function without running it live.

For useful AI systems, even the slightest change in the start-up conditions or training can produce drastically different outcomes. A live AI system can be like shifting sand. It will build up a structure to solve problems, and do it well, but the characteristics of its internal workings will vary significantly from one similar system to another. Thus, an AI system’s workings as it runs through a sandbox exercise may be unlike the same system’s workings running live. Which leads to the question: what confidence can a regulator, with an approval authority, have in a sandbox version of an AI system?

Pause. Count to ten and work out what impacts we must avoid. And how to do it.
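The “shifting sand” point can be illustrated with a toy example. This is a deliberately simplified sketch, not a claim about any real AI system: two training runs of the same tiny perceptron, differing only in their random seed, both learn the OR function, yet finish with different internal weights.

```python
# A toy illustration of "shifting sand": two runs of the same tiny
# perceptron, differing only in random seed, both learn the OR function
# yet end up with different internal weights. A simplified sketch, not a
# claim about any real AI system.
import random

def train_or_gate(seed, epochs=50, lr=0.1):
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]  # random start-up state
    b = rng.uniform(-1, 1)
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    for _ in range(epochs):
        rng.shuffle(data)               # the seed also changes ordering
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # classic perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

w_a, b_a = train_or_gate(seed=1)
w_b, b_b = train_or_gate(seed=2)
print("Run A:", w_a, b_a)
print("Run B:", w_b, b_b)   # same learned behaviour, different internals
```

Scale this sensitivity up to billions of parameters and the regulator’s dilemma becomes clear: the sandboxed copy and the live copy may behave alike on the tests, while being quite different machines inside.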

Policy & AI

Today, the UK Government published its approach to Artificial Intelligence (AI)[1]. It’s in the form of a white paper. That’s a policy document created by the Government that sets out its proposals for future legislation.

This is a big step. Artificial Intelligence (AI) attracts both optimism and pessimism. Utopia and dystopia. There are a lot more people who sit in these opposing camps than there are who sit in the middle. It’s big. Unlike any technology that has been introduced to the whole populace.

On Friday last, I caught the film I, Robot (2004)[2] showing early evening on Film 4. It’s difficult to believe this science fiction is nearly 20 years old, and that the Isaac Asimov short stories on which it’s based are from the 1950s. AI is fertile ground for the imagination to range over.

Fictional speculation about AI has veered towards the dystopian end of the scale, although that’s not the whole story by far. One example of good AI is the sentient android in the Star Trek universe. The android “Data”, based aboard the USS Enterprise, strives to help humanity and be more like us. His attempts to understand human emotions are often significant plot points. He’s a useful counterpoint to evil alien intelligent machines that predictably aim to destroy us all.

Where fiction helps is in giving an airing to lots of potential scenarios for the future. That’s not trivial. Policy on this rapidly advancing subject should not be narrowly based or dogmatic.

Where there isn’t a great debate is over the high-level objectives that society should endeavour to achieve. We want technology to do no harm. We want technology to be trustworthy. We want technology to be understandable.

Yet, we know from experience, that meeting these objectives is much harder than asserting them. Politicians love to assert. In the practical world, it’s public regulators who will have to wrestle with the ambitions of industry, unforeseen outcomes, and negative public reactions.

Using the words “world leading” repeatedly is no substitute for resourcing regulators to beef up their capabilities when faced with rapid change. Vague and superficial speeches are fine in context. After all, there’s a job to be done maintaining public confidence in this revolutionary technology.

What’s evident is that we should not delude ourselves. This technical transformation is unlike any we have so far encountered. Its radical nature and speed mean that even when Government and industry work together, they are still going to be behind the curve.

As a fictional speculation an intelligent android who serves as a senior officer aboard a star ship is old school. Now, I wonder what we would make of an intelligent android standing for election and becoming a Member of Parliament?


[1] The UK’s AI Regulation white paper was published on Wednesday, 29 March 2023. Organisations and individuals involved in the AI sector are encouraged to provide feedback on the white paper through a consultation which runs until Tuesday, 21 June 2023.

[2] https://en.wikipedia.org/wiki/I,_Robot_(film)

Progress?

For 99p in a well-known charity book shop, I picked up a tidy little paperback book. It’s wonderfully illustrated, mixing humour with one or two earnest thoughts. Originally, it would have been about 3 shillings[1] (15 new pence) to buy. So, I may have paid over the odds.

Was C. Northcote Parkinson[2] right? Certainly, when I listen to the epic tale of HS2[3], it does get me wondering whether Parkinson’s Law works as well in the 2020s as it did in the late 1950s. Progress is slow, as work expands. The more there is to do, the more there is to do.

The UK’s number one railway project, High Speed Two (HS2), is massive. Its image of yellow-jacketed workers stomping across chewed-up fields is a long way from the reality. In the back rooms and offices are thousands of planners, managers, and administrators toiling intensely. Politicians posture over reams of reports and change their minds at every juncture. There’s a hitch every week.

Given my experiences, I should be able to make some judgements about Parkinson’s Law. That is to say that: work expands so as to fill the time available for its completion. It’s generally associated with Government administration and the operation of a civil service. My observation is that large scale industry is just as guilty of this characteristic. 

It was said that a large aircraft could not be certified until the pile of paper needed to do so weighed as much as the finished product. This tongue-in-cheek saying stems from the frustration that builds up when progress is slower than people would like. What a “pile of paper” means in the digital world is more difficult to ascertain, but it’s a lot of stuff.

Whatever the merit of Parkinson’s Law, the arguments made for it have been undermined as employment practices have changed dramatically since the 1950s. Internal structures of bureaucratic, deeply hierarchical organisation are no longer the fashion. The phenomenon of Buggins’ turn[4] still exists but is in abeyance. Much of industry may have shaken it off, but the political world still clings on and offers jobs on seniority rather than merit. Hierarchical organisations that feed on a certainty of their continued existence remain plentiful, but they are now subject to more disruption.

Parkinson does mock the large organisations of his time. Some of his anecdotes resonate perfectly with the world of the 2020s. These are observations of human behaviour.

One that rings a bell with me is the description of a board meeting where agenda items are methodically addressed in order. Let’s say item 9 on the agenda is a major investment expenditure and the next item, item 10, addresses staff car parking spaces. No prizes for guessing which one gets the most discussion time; Parkinson called this the “law of triviality”. When faced with complex financial arguments and detailed pages of figures, there’s a tendency to defer to those who know about that sort of stuff. When faced with a subject that everyone understands and that impacts everyone in an obvious way, the temptation to engage in discussion of the latter is overwhelming.

Let’s conclude that progress that doesn’t take account of the human factor is going to hit the rails or maybe worse.


[1] https://www.royalmintmuseum.org.uk/journal/curators-corner/shilling/

[2] Parkinson’s Law or The Pursuit of Progress, John Murray Paperbacks, 1958.

[3] https://www.hs2.org.uk/

[4] https://english.stackexchange.com/questions/171256/who-was-buggins-of-buggins-turn

Digital toxicity

There’s a tendency to downplay the negative aspects of the digital transition that’s happening at pace. Perhaps it’s acceptance of the inevitability of change; objections are voiced only in hushed tones.

A couple of simple changes struck me this week. One was my bank automatically moving me to an online statement; the other was a news story about local authorities removing pay machines from car parks on the assumption that everyone has a mobile phone.

With these changes there’s a high likelihood that difficulties are going to be caused for a few people. Clearly, the calculation of the banks and local authorities is that the majority rules. Exclusion isn’t their greatest concern but saving money is high on their list of priorities.

The above aside, my intention was to write about the more general toxic impacts of the fast-moving digital transition. Now, please don’t get me wrong. In most situations such a transition has widespread benefits. What’s of concern is how few mitigations there are for any downsides.

Let’s list a few negatives that may need more attention.

Addiction. With social media this is unquestionable[1]. After all, digital algorithms are developed to get people engaged, and to keep them engaged for as long as possible. It’s the business model that brings in advertising revenues. There’s FOMO too: the fear of missing out on something new or novel that others might see but you might not.

Attention. Rapidly stroking a touch screen to move from image to image, or video to video, encourages less attention to be given to any one piece of information. What research there is shows a general decline in attention span[2] as a characteristic of being subject to increasing amounts of easily available information.

Adoration. Given that so many digital functions are provided with astonishing accuracy, availability, and speed, there’s a natural inclination to trust their output. When that trust is justified a high percentage of the time, the few occasions when information is in error can easily be ignored or missed. This can lead to people defending or supporting information that is wrong[3] or misleading.

It’s reasonable to say there are downsides to any use of technology. That said, it’s as well to try to mitigate those that are known about and understood. The big problem is the cumulative effect of the downsides. This can increase the fragility and vulnerability of the systems that we all depend upon.

If digital algorithms were medicines or drugs, there would be a whole array of tests conducted before their public release. Some would be strongly regulated. I’m not saying that’s the way to go but it’s a sobering thought.


[1] https://www.theguardian.com/global/2021/aug/22/how-digital-media-turned-us-all-into-dopamine-addicts-and-what-we-can-do-to-break-the-cycle

[2] https://www.kcl.ac.uk/news/are-attention-spans-really-collapsing-data-shows-uk-public-are-worried-but-also-see-benefits-from-technology

[3] https://www.bbc.co.uk/news/business-56718036

Comms

The long history of data communications between air and ground has had numerous stops and starts. It’s not new to use digital communications while flying around the globe. That said, it has not been cheap, and traditional systems have evolved only slowly. We might think Controller Pilot Data Link Communications (CPDLC)[1] is quite whizzy. It’s not. It belongs to the Windows 95 generation. Clunky messages and limited applications.

The sluggishness of the adoption of digital communications in commercial aviation has several reasons. For one, standardised, certified, and maintainable systems and equipment have been expensive. It’s not just the purchase and installation but the connection charges that mount up.

Unsurprisingly, aircraft operators have moved cautiously unless they can identify an income stream to be developed from airborne communication. That’s one reason why passengers accessing the internet from their seats can have better connections than the two crew in the cockpit.

Larger nations’ military flyers don’t have a problem spending money on airborne networking. For them it’s an integral part of being able to operate effectively. In the civil world, each part of the aviation system must make an economic contribution or be essential to safety to make the cut.

The regulatory material applicable to Airborne Communications, Navigation and Surveillance (CS-ACNS)[2] can be found in publications coming from the aviation authorities. This material has the purpose of ensuring a high level of safety and aircraft interoperability. Much of this generally applicable material has evolved slowly over the last 30 years.

Now, it’s good to ask: is this collection of legacy aviation systems going to be changed by the new technologies that are rapidly coming on-stream this year? Or are the current mandatory equipage requirements likely to stay the same, but be greatly enhanced by cheaper, faster, and lower-latency digital connections?

This year, Starlink[3] is offering high-speed, in-flight internet connections with global coverage. This company is not the only one developing Low Earth Orbit (LEO)[4] satellite communications. There are technical questions to be asked in respect of safety, performance, and interoperability, but it’s a good bet that these new services will be very capable and, what’s more, not so expensive[5].

It’s time for airborne communications to step into the internet age.

NOTE: The author was a part of the EUROCAE/RTCA Special Committee 169 that created Minimum Operational Performance Standards for ATC Two-Way Data Link Communications back in the 1990s.

POST 1: Elon Musk’s Starlink Internet Service Coming to US Airlines; Free WiFi (businessinsider.com)

POST 2: With the mandate of VDLM2 we evolve at the pace of a snail. Internet Protocol (IP) Data Link may not be suitable for all uses but there’s a lot more that can be done.


[1] https://skybrary.aero/articles/controller-pilot-data-link-communications-cpdlc

[2] https://www.easa.europa.eu/en/document-library/easy-access-rules/easy-access-rules-airborne-communications-navigation-and

[3] https://www.starlink.com/

[4] https://www.esa.int/ESA_Multimedia/Images/2020/03/Low_Earth_orbit

[5] https://arstechnica.com/information-technology/2022/10/starlink-unveils-airplane-service-musk-says-its-like-using-internet-at-home/