Weight

Projects aiming to electrify aviation are numerous. This is one strand of the vigorous effort to reduce the environmental impact of civil aviation. Clearly, feasible aircraft that do not use combustion are an attractive possibility. This step shows signs of being practical for the smaller sizes of aircraft.

Along the research road there are several hurdles that need to be overcome. One centres on the source of airborne power that is used. State-of-the-art battery technology is heavy. The combinations of materials used and the modest energy densities available result in the need for bulky batteries.

For any vehicle based on electric propulsion, a chief challenge is not only to carry a useful load but to carry its own power source. These issues are evident in the introduction of electric road vehicles. They are by no means insurmountable, but they are quite different from those of conventional combustion-engined vehicles.

The density of conventional liquid fuels means that we get a big bang for our buck[1]. Not only that, but as a flight progresses the weight of fuel to be carried by an aircraft reduces. That’s two major pluses for kerosene. The major negative remains the environmental impact of its use.
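That density gap can be put in rough numbers. A minimal sketch, using representative specific-energy figures of my own choosing (they are not from the text above, and real cell figures vary):

```python
# Rough comparison of specific energy: kerosene vs lithium-ion batteries.
# Representative round numbers, chosen for illustration:
#   Jet A-1 kerosene ~ 43 MJ/kg; current Li-ion cells ~ 0.9 MJ/kg (~250 Wh/kg).
KEROSENE_MJ_PER_KG = 43.0
LI_ION_MJ_PER_KG = 0.9

ratio = KEROSENE_MJ_PER_KG / LI_ION_MJ_PER_KG
print(f"Kerosene stores roughly {ratio:.0f}x more energy per kilogram")
```

Even allowing for electric powertrains being several times more efficient than turbines, a per-kilogram gap of this size is why battery weight dominates electric aircraft design.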

Both electricity and conventional liquid fuels have a huge plus. The ground infrastructure needed to move them from A to B is well understood and not onerously expensive. It’s no good considering an aircraft design entirely in isolation. Any useful vehicle needs to be able to be re-powered easily, not too frequently and without breaking the bank[2].

Back to the subject of weight. It really is a number one concern. I recall a certain large helicopter design where the effort put into weight reduction was considerable. Design engineers were rushing around trying to shave off even a tiny fraction of weight from every bit of kit. At one stage it was mooted that designers should remove all the handles from the avionics boxes in the e-bay of the aircraft. That was dismissed after further thought about how that idea would impact aircraft maintenance. However, suppliers were urged to think again about equipment handling.

This extensive exercise happened because less aircraft weight equated to more aircraft payload. That simple equation was a massive commercial driver. It could be the difference between being competitive in the marketplace or being overtaken by others.

Aviation will always face this problem. Aircraft design is sensitive to weight. Not only does this mean maximum power at minimum weight, but it also means that what power is available must be used in the most efficient manner possible.

So, is there a huge international investment in power electronics for aviation? Yes, it does come down to semiconductors. Now, there’s a lot of piggybacking[3] from the automotive industries. In my view that’s NOT good enough. [Sorry about the idiom overload.]


[1] https://dictionary.cambridge.org/dictionary/english/bang-for-the-buck

[2] https://dictionary.cambridge.org/dictionary/english/break-the-bank

[3] https://dictionary.cambridge.org/dictionary/english/piggybacking

UAP

….none of us are familiar with the variety in shape and size of flying machines currently being designed and developed for general use

There was a time when anyone raising the issue of the potential for an asteroid to send humans back to the stone age was mocked and derided. Anyone bringing apparent sci-fi plots into Parliament was jeered. Now, the subject is studied with intensity and considerable resources. The probability of Near-Earth Object[1] (NEO) impact is calculated, and small asteroid and comet orbits are monitored in detail.

Really bad films, like the one starring Bruce Willis, have a lot to answer for. That space between fiction and reality gets filled with more than a few eccentrics and conspiracy theories. The trouble is that gives you and me licence to smirk any time cosmic occurrences come into discussion.

I must admit I like the term Unidentified Anomalous Phenomena (UAP) better than UFO. They are airborne phenomena, they are unidentified until we know better, and they are anomalous. Although most reports are attributed to things that are known, even if they are rare events. Some are poorly reported and only scant evidence is available.

Discovering all there is to know about such airborne phenomena is a matter of both safety and security. However remote it might seem, part of this is the safety of aircraft in flight. I know of no examples of extra-terrestrial objects colliding with aircraft but it’s not impossible. I’m reminded of that classic picture of a bullet hitting a bullet in-flight and fusing together. It’s from the Battle of Gallipoli.

We might be entering a new era of transparency in the scientific study of UAP. This is a wholly good thing and highly necessary given the coming expansion in the number of air vehicles in flight. If Advanced Air Mobility (AAM) is going to do anything, it’s going to lead to an increase in aviator and public reports. For one, none of us are familiar with the variety in shape and size of flying machines currently being designed and developed for general use. It’s likely that red and green lights moving through the sky at night are going to prompt public reports of the “unknown”.

Perspective plays a part too. A small drone close by can look like a large airship at a distance. As environmental conditions change, so the perception of airborne objects can change dramatically. So, what we might observe and confidently attribute to be a drone or helicopter or aircraft in flight is not always definitive. Applying disciplined scientific analysis to the data that is available has benefits.

Given that our airspace is likely to become ever more crowded, NASA’s study[2] of UAP has much merit. Recognising that resources are needed for this work is a lesson most nations need to learn. We can sit on our hands or giggle at the more ridiculous interpretations of observations, but this kind of reporting and analysis will be advantageous to aviation safety and security. It’s part of giving the public confidence that nothing unknown, unmanaged or uncontrolled is going on above their heads too.

POST: UFOs: Five revelations from Nasa’s public meeting – BBC News


[1] https://neo.ssa.esa.int/home

[2] https://www.youtube.com/watch?v=bQo08JRY0iM

Don Bateman

At the start of the jet age, changes in aircraft design and the improvement of maintenance procedures made a significant improvement in aviation safety. One set of accidents remains stubbornly difficult to reduce. This is the tragic case where a perfectly airworthy aircraft is flown into the ground or sea. Clearly, the crew in such cases had no intention to crash, but nevertheless the crash happens. Loss of situation awareness, fixation on other problems or lack of adherence to standard operating procedures can all contribute to these aircraft accidents. So often these are fatal accidents.

One strategy for reducing accidents, where there is a significant human factor, is the implementation of suitable alerting and warning systems in the cockpit. It could be said that such aircraft systems support the vigilance of the crew and thus help reduce human error.

For decades the number one fatal accident category was Controlled Flight Into Terrain (CFIT). It always came top of global accident analysis reports. Pick up a book on the world’s major civil aircraft crashes since the 1960s and there will be a list of CFIT accidents. By the way, this term CFIT is an internationally agreed category for classifying accidents[1]. Twenty years ago, I was part of a team that managed these classifications.

When I started work on aircraft certification, in the early 1990s, the Ground Proximity Warning System (GPWS) already existed. A huge amount of work had been done since the 1970s defining and refining a set of protection envelopes that underpinned cockpit warnings aimed at avoiding CFIT.
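Those protection envelopes are, in essence, boundaries in a space of height above terrain against descent rate: inside the boundary, fly on; outside it, alert the crew. As a purely illustrative sketch of the idea (the thresholds and the linear shape below are invented, not the certified GPWS Mode 1 envelope defined in the equipment standards):

```python
def excessive_descent_alert(radio_alt_ft: float, sink_rate_fpm: float):
    """Illustrative 'excessive descent rate' check in the spirit of a GPWS
    protection envelope. All numbers here are hypothetical."""
    if radio_alt_ft > 2500:
        return None  # envelope only applies close to terrain
    # Allowable sink rate shrinks as the ground gets closer (linear sketch).
    warning_limit = 1500 + 2.0 * radio_alt_ft   # ft/min, hypothetical
    caution_limit = 1000 + 1.5 * radio_alt_ft   # ft/min, hypothetical
    if sink_rate_fpm > warning_limit:
        return "PULL UP"       # hard warning
    if sink_rate_fpm > caution_limit:
        return "SINK RATE"     # caution / alert
    return None

print(excessive_descent_alert(500, 2400))
```

The real refinement work of the 1970s and after lay in shaping such boundaries so that genuine threats trigger alerts early while normal approaches do not.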

UK CAA Specification 14 on GPWS dates from 1976[2]. This safety equipment had been mandated in many countries for certain types of public transport aircraft operation. It was by no means fitted to all aircraft and all types of aircraft operation. This was highlighted when an Air Inter Airbus A320 crashed near Strasbourg, France, in January 1992[3].

No alerting or warning system is perfect. GPWS had been successful in reducing the number of CFIT accidents but there were still occurrences where the equipment proved ineffective or was ignored.

I first met Don Bateman[4] on one of his whistle-stop tours presenting detailed analysis of CFIT accidents and the latest versions of the GPWS. At that time, he was working for the company Sundstrand[5], based in Redmond in Washington State, US. It was a time when Enhanced GPWS (EGPWS)[6] was being promoted. This version of the equipment had an added capability to address approaches to runways where the classic GPWS was known to give false results. False alerts and warnings are the enemy of any aircraft system since they reduce a crew’s confidence in its workings.

My role was the UK approval of the systems and equipment. Over a decade the industry moved from a basic GPWS to EGPWS to what we have now, Terrain Avoidance and Warning Systems (TAWS).

When I think of Don Bateman’s contribution[7], there are few people who have advanced global aviation safety as much as he did. His dedication to driving forward GPWS ensured the technology became almost universal. Consequently, there must be a large number of lives saved because of the CFIT accidents that did not happen.

He left no doubt as to his passion for aviation safety, was outstandingly professional and a pleasure to work with on every occasion. This work was an example of a positive and constructive partnership between aviation authorities and industry. We need more of that approach.

POST 1: Don Bateman Saved More Lives Than Anyone in Aviation History | Aviation Pros

POST 2: Don Bateman, ‘Father’ Of Terrain Awareness Warning Systems, Dies At 91 | Aviation Week Network


[1] https://www.intlaviationstandards.org/Documents/CICTTStandardBriefing.pdf

[2] https://publicapps.caa.co.uk/docs/33/CASPEC14.PDF

[3] https://reports.aviation-safety.net/1992/19920120-0_A320_F-GGED.pdf

[4] https://www.invent.org/inductees/c-donald-bateman

[5] https://archive.seattletimes.com/archive/?date=19930125&slug=1681820

[6] https://aerospace.honeywell.com/us/en/pages/enhanced-ground-proximity-warning-system

[7] https://aviationweek.com/air-transport/safety-ops-regulation/don-bateman-father-terrain-awareness-warning-systems-dies-91

Experts

The rate of increase in the power of artificial intelligence (AI) is matched by the rate of increase in the number of “experts” in the field. I’ve heard that jokingly said. Five minutes on Twitter and it’s immediately apparent that off-the-shelf opinions run from “what’s all the fuss about?” to “Armageddon is just around the corner”.

Being a bit of a stoic[1], I take the view that opinions are fine, but the question is: what’s the reality? That doesn’t mean ignoring honest speculation, but that speculation should have some foundation in what’s known to be true. There are plenty of emotive opinions that are wonderfully imaginative. The problem is that they don’t help us take the best steps forward when faced with monumental changes.

Today’s report is of the retirement of Dr Geoffrey Hinton from Google. Now, there’s a body of experience in working with AI. He warns that the technology is heading towards a state where it’s far more “intelligent” than humans. He’s raised the issue of “bad actors” using AI to the detriment of us all. These seem to me valid concerns from an experienced practitioner.

For decades, the prospect of a hive mind has peppered science fiction stories with tales of catastrophe. With good reason given that mind-to-mind interconnection is something that humans haven’t mastered. This is likely to be the highest risk and potential benefit. If machine learning can gain knowledge at phenomenal speeds from a vast diversity of sources, it becomes difficult to challenge. It’s not that AI will exhibit wisdom. It’s that its acquired information will give it the capability to develop, promote and sustain almost any opinion.

Let’s say the “bad actor” is a colourful politician of limited competence with a massive ego and ambition beyond reason. Sitting alongside AI that can conjure up brilliant speeches and strategies for beating opponents, that character can become dangerous.

So, to talk about AI as the most important inflection point in generations is not hype. In that respect the rapid progress of AI is like the invention of dynamite[2]. It changed the world in both positive and negative ways. Around the world countries have explosives laws and require licences to manufacture, distribute, store, use, and possess explosives or their ingredients.

So far, mention of the regulation of AI makes people in power shudder. Some lawmakers are bigging-up a “light-touch” approach. Others are hunched over a table trying to put together threads of a regulatory regime[3] that will accentuate the positive and eliminate the negative[4].


[1] https://dailystoic.com/what-is-stoicism-a-definition-3-stoic-exercises-to-get-you-started/

[2] https://en.wikipedia.org/wiki/Dynamite

[3] https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

[4] https://youtu.be/JS_QoRdRD7k

AI awakens

Artificial Intelligence (AI)[1] is with us. Give it a question and it will answer you. Do it many times, with access to many information sources and it will improve its answer to you. That seems like a computer that can act like a human. In everyday reality, AI mimics a small number of the tasks that “intelligent” humans can do and do with little effort.

AI has a future. It could be immensely useful to humanity. As with other revolutions, it could take the drudgery out of administrative tasks, simple research, and well characterised human activities. One reaction to this is to joke that – I like the drudgery. Certainly, there’s work that could be classified as better done by machine but there’s pleasure to be had in doing that work.

AI will transform many industries, but will it ever wake up[2]? Will it ever become conscious?

A machine acting human is not the same as it becoming conscious. AI mimicking humans can give the appearance of being self-aware but it’s not. Digging deep inside the mechanism it remains a computational machine that knows nothing of its own existence.

We don’t know what it is that can give rise to consciousness. It’s a mystery how it happens within our own brains. It’s not a simple matter. It’s not magic either but it is a product of millions of years of evolution.

Humans learn from our senses. A vast quantity of experiences over millennia have shaped us. Not by our own choosing but by chance and circumstances. Fortunately, a degree of planetary stability has aided this growth from simple life to the complex creatures we are now.

One proposition is that complexity and consciousness are linked. That is, consciousness in a machine may arise from billions and billions of connections and experiences. It’s an emergent behaviour that arises at some unknown threshold. As such this proposition leaves us with a major dilemma. What if we inadvertently create conscious AI? What do we do at that moment?

Will it be an accidental event? There are far more questions than answers. No wonder there’s a call for more research[3].


[1] https://www.bbc.co.uk/newsround/49274918

[2] https://www2.deloitte.com/us/en/pages/consulting/articles/the-future-of-ai.html

[3] https://www.bbc.co.uk/news/technology-65401783.amp

Working hard for the money

What goes wrong with research spending? It’s a good question to ask. In some ways research spending is like advertising spending. “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.[1]” Globally billions are spent on advertising so you might say – it must be working. In fact, far more is spent on advertising than is ever available for research in the aviation and aerospace world.

Research spending is a precious asset because it is bounded. Even so, a great deal of research spending is lost on activities that deliver little or no benefit. It’s true Governments, institutions and industry don’t often put up funds for vague and imprecise aspirations or outlandish predictions, but nevertheless money goes down a sinkhole on far too many occasions.

A reluctance to take tough decisions, or at the other extreme of the spectrum a relish for disruption, plagues research funding decision-making. Bad projects can live long lives and good projects get shut down before their time. My observation is that the following cases crop up all too often across the world.

Continuing to service infrastructure that cost a great deal to set up. It’s the classic problem of having spent large sums of money on something, whereby the desperation to see a benefit encourages more spending. Nobody likes to admit defeat or that their original predictions were way off the mark.

Circles of virtue are difficult to address. For example, everyone wants to see a more efficient and sustainable use of valuable airspace, so critics of spending towards that objective are not heard, even if substantial spending is misdirected or hopelessly optimistic.

Glamorous and sexy subjects, often in the public limelight, get a leg-up when it comes to the evaluation of potential research projects. Politicians love press photographs that associate them with something that looks like a solution in the public mind. Academics are no different in that respect.

Behold unto the gurus! There are conferences and symposia where ideas are hammered home by persuasive speakers and charismatic thinkers. Amongst these forums there are innovative ideas, but also those that get more consideration than they warrant.

Narrowly focused recommendations can distort funding decision-making. With the best of intent, an investigation or study group might highlight a deficiency that needs work, but it sits in a distinct niche of interest. It can be a push in the direction opposite to that of a Pareto analysis[2].
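A Pareto analysis, by contrast, ranks candidate areas by expected contribution and concentrates resources on the vital few that account for most of the benefit. A toy sketch, with invented categories and figures:

```python
# Pareto-style ranking: find the "vital few" categories that account for
# ~80% of the total benefit. All data here are invented for illustration.
contributions = {
    "avionics integration": 45,
    "airspace modelling": 25,
    "materials": 15,
    "niche study A": 5,
    "niche study B": 5,
    "niche study C": 5,
}

total = sum(contributions.values())
cumulative, vital_few = 0, []
for name, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    cumulative += value
    vital_few.append(name)
    if cumulative / total >= 0.8:   # classic 80% cut-off
        break

print(vital_few)
```

Three of six areas cover 85% of the benefit in this toy example; a narrowly focused recommendation is, in effect, a push to fund the tail instead.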

Highlighting these points is easier than fixing the underlying problems. It’s a good start to be aware of them before pen meets paper and a contract is signed.


[1] statement on advertising, credited to both John Wanamaker (1838-1922) and Lord Leverhulme (1851-1925).

[2] https://asq.org/quality-resources/pareto

Who’s in control?

The subject of artificial intelligence (AI) in an aircraft cockpit stirs up reactions that are both passionate and pragmatic. Maybe it’s a Marmite issue[1]. Mention of the subject triggers an instant judgement.

Large passenger transport civil aircraft are flown by two human operators. Decisions are made by those two human operators. They are trained and acquire experience doing the job of flying. A word that has its origins in the marine world is used to describe their role – pilot.

One of my roles, early on in my career, was to lead the integration of a cockpit display system into a large new helicopter[2]. New, at the time. The design team I was part of comprised people with two different professional backgrounds. One group had an engineering background, like me, and the other had qualifications in psychology. The recognition that an aircraft cockpit is where the human and machine meet is not new. A lot of work was done in simulation with flight crews.

The first generation of jet aircraft put the pilot in full-time command. As we moved away from purely mechanical interactions with aircraft, the balance of flight control came to be shared between pilot and aircraft systems. There’s no doubt, in the numbers, that this has improved aviation safety.

Nobody is calling for the removal of aircraft autopilot systems. Much of the role of the formerly required flight engineer has been integrated into the aircraft systems. Information is compressed and summarised on flat screen displays in the aircraft cockpit.

Today, AI is not just one thing. There’s a myriad of different types and configurations, some of which are frozen and some of which are constantly changing as they learn and grow. That said, a flawless machine is a myth. Now, that’s a brave statement. We are generations away from a world where sentient machines produce ever better machines. It’s the stuff of sci-fi.

As we have tried to make ever more capable machines, failures are a normal part of evolution. Those cycles of attempts and failures will need to run into the billions and billions before human capabilities are fully matched. Yes, I know that’s an assertion, but it has taken humans more than a million years to get to have this discussion. That’s with our incredible brains.

What AI can do well is to enhance human capabilities[3]. Let’s say that, of all the billions of combinations and permutations an aircraft in flight can experience, a failure occurs that is not expected, not trained for, and not easily understood. This is where the benefits and speed of AI can add a lot. Aircraft systems using AI should be able to consider a massive number of potential scenarios and provide a selection of viable options to a flight crew. In time-critical events AI can help.

The road where AI replaces a pilot in the cockpit is a dead end. The road where AI helps a pilot in managing a flight is well worth pursuing. Don’t set the goal at replacing humans. Set the goal at maximising the unique qualities of human capabilities.


[1] https://www.macmillandictionary.com/dictionary/british/marmite_2

[2] https://en.wikipedia.org/wiki/AgustaWestland_AW101

[3] https://hbr.org/2021/03/ai-should-augment-human-intelligence-not-replace-it

First Encounter

My first encounter with what could be classed as early Artificial Intelligence (AI) was a Dutch research project. It was around 2007. Let’s first note, a mathematical model isn’t pure AI, but it’s an example of a system that is trained on data.

It almost goes without saying that learning from accidents and incidents is a core part of the process to improve aviation safety. A key industry and regulatory goal is to understand what happened when things go wrong and to prevent a repetition of events.

Civil aviation is an extremely safe mode of transport. That said, because of the size of the global industry there are enough accidents and incidents worldwide to provide useful data on the historic safety record. Despite significant pre-COVID pandemic growth of civil aviation, the number of accidents is so low that further reduction in numbers is proving hard to win.

What if a system was developed that could look at all the historic aviation safety data and make a prediction as to what accidents might happen next?

The first challenge is the word “all” in that compiling such a comprehensive record of global aviation safety is a demanding task. It’s true that comprehensive databases do exist but even within these extremely valuable records there are errors, omissions, and summary information. 

There’s also the kick back that is often associated with record keeping. A system that demands detailed record keeping, of even the most minor incident can be burdensome. Yes, such record keeping has admirable objectives, but the “red tape” wrapped around its objectives can have negative effects.

Looking at past events has only one aim: to do things now that prevent aviation accidents in the future. Once a significant comprehensive database exists, analysis can provide simple indicators that give clues as to what might happen next. Even basic mathematics can give us a trend line drawn through a set of key data points[1]. It’s effective but crude.
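That crude trend line is just a least-squares fit. A minimal sketch, with accident-rate figures invented for illustration:

```python
# Least-squares trend line through yearly accident-rate data points.
# The data below are invented for illustration.
years = [2015, 2016, 2017, 2018, 2019]
rates = [2.8, 2.6, 2.7, 2.4, 2.3]   # e.g. accidents per million flights

n = len(years)
mean_x = sum(years) / n
mean_y = sum(rates) / n
# slope = covariance(x, y) / variance(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, rates)) \
        / sum((x - mean_x) ** 2 for x in years)
intercept = mean_y - slope * mean_x

print(f"trend: {slope:+.3f} per year; 2020 estimate: {slope * 2020 + intercept:.2f}")
```

A straight line through five points extrapolates a direction of travel, but it carries no knowledge of why the rate moved, which is exactly the gap a causal model tries to fill.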

What if a prediction could take on-board all the global aviation safety data available, with the knowledge of how civil aviation works and mix it in such a way as to provide reliable predictions? This is prognostics. It’s a bit like the Delphi oracle[2]. The aviation “oracle” could be consulted about the state of affairs in respect of aviation safety. Dream? – maybe not.

The acronym CAT normally refers to large commercial air transport (CAT) aeroplanes. What this article is about is a Causal model for Air Transport Safety (CATS)[3]. This research project could be called an early use of “Big Data” in aviation safety work. However, as I understand it, the original aim was to make prognostics a reality.

Using Bayesian network-based causal models it was theorised that a map of aviation safety could be produced. Then it could be possible to predict the direction of travel for the future.
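The flavour of such a model can be shown in miniature. The tiny network below is invented for illustration (it is not the CATS model itself): hidden causal factors feed a procedural lapse, which feeds an incident, and the marginal incident probability is found by summing over the hidden states.

```python
# Minimal Bayesian-network style calculation in the spirit of a causal
# safety model. Structure and probabilities are invented for illustration.
P_fatigue = 0.10                                  # P(crew fatigue)
P_lapse_given = {True: 0.30, False: 0.05}         # P(procedure lapse | fatigue)
P_incident_given = {True: 0.02, False: 0.001}     # P(incident | lapse)

# Marginalise over the hidden causes: P(incident) = sum over fatigue, lapse.
p_incident = 0.0
for fatigue in (True, False):
    p_f = P_fatigue if fatigue else 1 - P_fatigue
    for lapse in (True, False):
        p_l = P_lapse_given[fatigue] if lapse else 1 - P_lapse_given[fatigue]
        p_incident += p_f * p_l * P_incident_given[lapse]

print(f"P(incident) = {p_incident:.5f}")
```

The appeal of the causal form is that interventions can be explored: halve the fatigue probability in the model and the predicted incident rate responds, which a bare trend line cannot do.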

This type of quantification has a lot of merit. It has weaknesses, in that the Human Factor (HF) often defies prediction. However, as AI advances, maybe causal modelling ought to be revisited. New off-the-shelf tools could be used to look again at the craft of prediction.


[1] https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Air_safety_statistics_in_the_EU

[2] https://www.history.com/topics/ancient-greece/delphi

[3] https://open.overheid.nl/documenten/ronl-archief-d5cd2dc7-c53f-4105-83c8-c1785dcb98c0/pdf

Policy & AI

Today, the UK Government published an approach to Artificial Intelligence (AI)[1]. It’s in the form of a white paper. That’s a policy document created by the Government that sets out its proposals for future legislation.

This is a big step. Artificial Intelligence (AI) attracts both optimism and pessimism. Utopia and dystopia. There are a lot more people who sit in these opposing camps than there are who sit in the middle. It’s big. Unlike any technology that has been introduced to the whole populace.

On Friday last, I caught the film I, Robot (2004)[2] showing early evening on Film 4. It’s difficult to believe this science fiction is nearly 20 years old, and that the Isaac Asimov stories on which it’s based are from the 1950s. AI is a fertile space for the imagination to range over.

Fictional speculation about AI has veered towards the dystopian end of the scale. Although that’s not the whole story by far. One example of good AI is the sentient android in the Star Trek universe. The android “Data”, serving aboard the USS Enterprise, strives to help humanity and be more like us. His attempts to understand human emotions are often significant plot points. He’s a useful counterpoint to evil alien intelligent machines that predictably aim to destroy us all.

Where fiction helps is to give an airing to lots of potential scenarios for the future. That’s not trivial. Policy on this rapidly advancing subject should not be narrowly based or dogmatic.

Where there isn’t a great debate is over the high-level objectives that society should endeavour to achieve. We want technology to do no harm. We want technology to be trustworthy. We want technology to be understandable.

Yet, we know from experience, that meeting these objectives is much harder than asserting them. Politicians love to assert. In the practical world, it’s public regulators who will have to wrestle with the ambitions of industry, unforeseen outcomes, and negative public reactions.

Using the words “world leading” repeatedly is no substitute for resourcing regulators to beef up their capabilities when faced with rapid change. Vague and superficial speeches are fine in context. After all, there’s a job to be done maintaining public confidence in this revolutionary technology.

What’s evident is that we should not delude ourselves. This technical transformation is unlike any we have so far encountered. Its radical nature and speed mean that even when Government and industry work together, they are still going to be behind the curve.

As a fictional speculation an intelligent android who serves as a senior officer aboard a star ship is old school. Now, I wonder what we would make of an intelligent android standing for election and becoming a Member of Parliament?


[1] The UK’s AI Regulation white paper will be published on Wednesday, 29 March 2023. Organisations and individuals involved in the AI sector will be encouraged to provide feedback on the white paper through a consultation which launches today and will run until Tuesday, 21 June 2023.

[2] https://en.wikipedia.org/wiki/I,_Robot_(film)

Radio on the hill

We take radio for granted. I’m listening to it, now. That magic of information transferred through the “ether[1]” at the speed of light and without wires. This mystery was first unravelled in the 19th century. Experimentation and mathematics provided insights into electromagnetics.

The practical applications of radio waves were soon recognised. The possibility of fast information transfer between A and B had implications for the communications and the battlefield.

It’s unfortunate to say that warfare often causes science to advance rapidly. The urgency to understand more is driven by strong needs. That phrase “needs must” comes to mind. We experienced this during the COVID pandemic. Science accelerated to meet the challenge.

It wasn’t until after he failed as an artist that Samuel Morse transformed communications by inventing the telegraph with his dots and dashes. There’s a telegraph gallery with reproductions of Morse’s early equipment at the Locust Grove Estate[2] in Poughkeepsie. I’d recommend it.

The electromagnetic telegraph used wires to connect A and B. Clearly, that’s not useful if the aim is to connect an aircraft with the ground.

The imperative to make air-ground communication possible came from the first world war. Aviation’s role in warfare came to the fore. Not just in surveillance of the enemy but offensive actions too. Experimentation with airborne radio involved heavy batteries and early spark transmitters. Making such crude equipment usable was an immense challenge. 

Why am I writing about this subject? This week, on a whim, I visited the museum at Biggin Hill. The Biggin Hill Museum[3] tells the story of the pivotal role played by the fighter station in the second world war. The lesser-known story is the origins of the station.

It’s one of Britain’s oldest aerodromes and sits high up on the hills south of London. Biggin Hill is one of the highest points in that area, rising to over 210 metres (690 ft) above sea level. 

Its transformation from agricultural fields to a research station (south camp) took place in 1916 and 1917. Its purpose was to explore the scientific and technical innovations of that time. Wireless in particular. 141 Squadron of the Royal Flying Corps (RFC) was based at Biggin Hill and equipped with Bristol Fighters. The RFC was the first to make use of wireless telegraphy to assist with artillery targeting.

These were the years before the Royal Air Force (RAF) was formed.

100 years later, in early 2019, the Biggin Hill Museum opened its doors to the public. It’s a small museum but well worth a visit. I found the stories of the early development of airborne radio communications fascinating. So much we take for granted had to be invented, tested, and developed from the most elemental components.

POST 1: Now, I wish I’d been able to attend this lecture – Isle of Wight Branch: The Development of Airborne Wireless for the R.F.C. (aerosociety.com)

POST 2: The bigger story marconiheritage.org/ww1-air.html


[1] https://www.britannica.com/science/ether-theoretical-substance

[2] https://www.lgny.org/home

[3] https://bigginhillmuseum.com/