Don Bateman

At the start of the jet age, changes in aircraft design and improvements in maintenance procedures brought significant gains in aviation safety. One set of accidents, however, remained stubbornly difficult to reduce: the tragic case where a perfectly airworthy aircraft is flown into the ground or sea. Clearly the crew in such cases had no intention of crashing, but nevertheless the crash happens. Loss of situation awareness, fixation on other problems or a lack of adherence to standard operating procedures can all contribute to these accidents. So often they are fatal.

One strategy for reducing accidents, where there is a significant human factor, is the implementation of suitable alerting and warning systems in the cockpit. It could be said that such aircraft systems support the vigilance of the crew and thus help reduce human error.

For decades the number one fatal accident category was Controlled Flight Into Terrain (CFIT). It always came top of global accident analysis reports. Pick up a book on the world’s major civil aircraft crashes since the 1960s and there will be a list of CFIT accidents. By the way, this term CFIT is an internationally agreed category for classifying accidents[1]. Twenty years ago, I was part of a team that managed these classifications.

When I started work on aircraft certification, in the early 1990s, the Ground Proximity Warning System (GPWS) already existed. A huge amount of work had been done since the 1970s defining and refining a set of protection envelopes that underpinned cockpit warnings aimed at avoiding CFIT.
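
To give a flavour of what a protection envelope looks like, here is a deliberately simplified sketch of one GPWS-style mode: an excessive descent rate check against height above the ground. It is a minimal illustration only; the function name, boundaries and numbers are assumptions made for this post and are not taken from any GPWS specification or certified equipment.

```python
# Simplified sketch of a GPWS-style protection envelope (illustrative only).
# Real equipment uses envelopes defined in the applicable specifications;
# the boundaries below are invented to show the shape of the logic.
from typing import Optional


def excessive_descent_rate_alert(radio_altitude_ft: float,
                                 descent_rate_fpm: float) -> Optional[str]:
    """Return an alert for an excessive-descent-rate style mode, or None."""
    if radio_altitude_ft <= 0 or descent_rate_fpm <= 0:
        return None  # on the ground, or not descending

    # The closer to the ground, the lower the descent rate that triggers
    # an alert (illustrative straight-line boundaries).
    caution_limit_fpm = 1000 + 2.5 * radio_altitude_ft   # "SINK RATE"
    warning_limit_fpm = 2000 + 3.0 * radio_altitude_ft   # "PULL UP"

    if descent_rate_fpm > warning_limit_fpm:
        return "PULL UP"
    if descent_rate_fpm > caution_limit_fpm:
        return "SINK RATE"
    return None


# Example: 4,000 ft/min of descent at 500 ft above the ground
print(excessive_descent_rate_alert(500, 4000))  # -> PULL UP
```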

UK CAA Specification 14 on GPWS dates from 1976[2]. This safety equipment had been mandated in many countries for certain types of public transport aircraft operation. It was by no means fitted to all aircraft and all types of aircraft operation. This was highlighted when an Air Inter Airbus A320 crashed near Strasbourg, France, in January 1992[3].

No alerting or warning system is perfect. GPWS had been successful in reducing the number of CFIT accidents but there were still occurrences where the equipment proved ineffective or was ignored.

I first met Don Bateman[4] on one of his whistle-stop tours presenting detailed analysis of CFIT accidents and the latest versions of the GPWS. At that time, he was working for the company Sundstrand[5], based in Redmond in Washington State, US. It was a time when Enhanced GPWS (EGPWS)[6] was being promoted. This version of the equipment had an added capability to address approaches to runways where the classic GPWS was known to give false results. False alerts and warnings are the enemy of any aircraft system since they reduce a crew’s confidence in its workings.
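
The key addition in EGPWS was a terrain and runway database combined with look-ahead alerting along the predicted flight path, which is what allowed those runway-specific false results to be designed out. The sketch below illustrates the look-ahead idea only, under stated assumptions: the projection, clearance margin and look-ahead time are invented for illustration, and the terrain lookup is a stand-in for a real database.

```python
# Illustrative sketch of the EGPWS "look-ahead" idea: project the aircraft
# forward along its track and compare predicted clearance over a terrain
# database with a required margin. All values and the terrain lookup are
# invented; real systems use certified databases and far more elaborate
# alerting surfaces.
import math
from typing import Callable, Optional


def lookahead_terrain_alert(lat_deg: float, lon_deg: float, track_deg: float,
                            groundspeed_kt: float, altitude_ft: float,
                            terrain_ft: Callable[[float, float], float],
                            lookahead_s: float = 60.0,
                            required_clearance_ft: float = 500.0) -> Optional[str]:
    # Position a look-ahead time along the current track (rough flat-earth maths).
    distance_nm = groundspeed_kt * lookahead_s / 3600.0
    dlat = (distance_nm / 60.0) * math.cos(math.radians(track_deg))
    dlon = (distance_nm / 60.0) * math.sin(math.radians(track_deg)) / max(
        math.cos(math.radians(lat_deg)), 0.1)

    clearance_ft = altitude_ft - terrain_ft(lat_deg + dlat, lon_deg + dlon)

    if clearance_ft < 0:
        return "TERRAIN, TERRAIN, PULL UP"
    if clearance_ft < required_clearance_ft:
        return "CAUTION TERRAIN"
    return None


# Example with a made-up terrain model: a 3,000 ft ridge north of 47.5N
ridge = lambda la, lo: 3000.0 if la > 47.5 else 200.0
print(lookahead_terrain_alert(47.49, -122.0, 0.0, 300.0, 3200.0, ridge))
# -> CAUTION TERRAIN
```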

My role was the UK approval of the systems and equipment. Over a decade the industry moved from a basic GPWS to EGPWS to what we have now, Terrain Awareness and Warning Systems (TAWS).

When I think of Don Bateman’s contribution[7], there are few people who have advanced global aviation safety as much as he did. His dedication to driving forward GPWS ensured the technology became almost universal. Consequently, a great many lives must have been saved because of the CFIT accidents that did not happen.

He left no doubt as to his passion for aviation safety, was outstandingly professional and a pleasure to work with on every occasion. This work was an example of a positive and constructive partnership between aviation authorities and industry. We need more of that approach.


[1] https://www.intlaviationstandards.org/Documents/CICTTStandardBriefing.pdf

[2] https://publicapps.caa.co.uk/docs/33/CASPEC14.PDF

[3] https://reports.aviation-safety.net/1992/19920120-0_A320_F-GGED.pdf

[4] https://www.invent.org/inductees/c-donald-bateman

[5] https://archive.seattletimes.com/archive/?date=19930125&slug=1681820

[6] https://aerospace.honeywell.com/us/en/pages/enhanced-ground-proximity-warning-system

[7] https://aviationweek.com/air-transport/safety-ops-regulation/don-bateman-father-terrain-awareness-warning-systems-dies-91

Ban

Some policies are directly targeted at fixing a problem; other policies may be aimed at indicating a direction of travel. I think the measures in France to ban domestic flights on short routes are the latter.

Internal routes where the same journey can be made by rail in under two-and-a-half hours are prohibited[1]. That can be done because high-speed rail transport offers a means of connecting certain French cities.
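
As a toy illustration of how simple the underlying test is, here is a sketch of the rule as reported: a domestic flight is off-limits where a rail alternative of under two-and-a-half hours exists. The route names and journey times are made up for the example, not official data.

```python
# Toy sketch of the reported test: a domestic flight is banned where a rail
# alternative of under two-and-a-half hours exists. The routes and journey
# times below are illustrative, not official data.
from typing import Optional

BAN_THRESHOLD_MIN = 150  # two and a half hours, in minutes


def flight_banned(rail_minutes: Optional[float]) -> bool:
    """True when a rail alternative exists and beats the threshold."""
    return rail_minutes is not None and rail_minutes < BAN_THRESHOLD_MIN


illustrative_routes = {
    ("Paris", "Lyon"): 115,      # minutes by rail (made-up figure)
    ("Paris", "Bordeaux"): 130,  # made-up figure
    ("Paris", "Nice"): None,     # no rail alternative inside the threshold
}

for (origin, destination), rail_time in illustrative_routes.items():
    status = "banned" if flight_banned(rail_time) else "permitted"
    print(f"{origin} -> {destination}: {status}")
```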

The calculation is that greenhouse gas emissions will be reduced by this control. There had been many calls for even stricter restrictions on flying in France. Lowering carbon emissions is a priority for many European governments. Sovereignty is primary in this respect: a State can take measures to control domestic flying much more readily than it can international flying. Connecting flights will not be changed by this new legislation.

High-speed trains do take passengers from airlines and take cars off the roads. Where a mature rail network exists, there are significant benefits in focusing on rail transport between cities. Often rail and air are complementary, with major high-speed rail stations at airports.

Given the rhetoric surrounding the “climate emergency”, these restrictions are a modest measure that will make only a small difference to carbon emissions. The symbolism is significant. It’s a drive in a transport policy direction that may go further in time, and other States may do the same.

Flying between Paris and Lyon doesn’t make much sense when a good alternative is available. Flying between London and Birmingham doesn’t make much sense either. However, changes like these need to be data-driven transformations. There needs to be a measurable reduction in greenhouse gas emissions as a result of their implementation. For example, displacing travellers onto the roads would be a negative outcome.

The imperative of greenhouse gas emission reduction means creative new measures will happen. It’s far better for aviation to adapt to this framework of operations rather than push back. The direction of travel is set.


[1] https://www.bbc.co.uk/news/world-europe-65687665

Bad Moon

The past is another country. The same can be said of the future too. The difference is the record book. Behind us we have the chronicles, from the first written words to this next key I’m about to tap. In front of us spreads a great deal of uncertainty.

What’s with the gloom and doom? Media of all kinds seem to bathe in a pool of pessimism. I can hear Creedence Clearwater Revival singing Bad Moon Rising[1] in the background. Despite climate change, economic downturns, war, and recovery from a pandemic no one was prepared for, this is a good time to be alive. We are a long way from the end of days. Or at least I hope we are.

In so far as fiction is concerned, I love a good dystopia. Unfortunately, some of the movies on this theme are quite ridiculous or downright annoying. The Day After Tomorrow[2] is a bucket load of piety and the remake of The Day the Earth Stood Still[3] has me throwing things at the TV.

Last night, I tried to get through the first half of a more recent movie called Reminiscence. It does amaze me that what must have seemed like such good ideas on paper can be transformed, at great expense, into a relatively average film. Yes, we are going to have to cope with rising sea levels and it will change the way people live.

What I’m addressing is the assertion made by a journalist who covers the cultural effect of science and technology[4]. It’s basically that all this focus on end-of-the-world stuff stops us from planning a positive future. I can quite understand the basis for such a proposition.

Dare I make a Hitchhiker’s Guide to the Galaxy (HHGTTG) reference? Well, I’m going to anyway[5]. It’s that society collapses if we spend all day looking at our feet, or to be more precise our shoes. Looking down all the time is equated with being depressed about the future. That leads to people buying more colourful shoes to cheer themselves up. Eventually, that process gets out of control and civilization collapses.

As someone who has spent a lot of time looking at accidents and incidents in the aviation world, I’m not on-side with the notion that bad news leads to gloominess and then immobility. I guess it does for some people. For me, it’s almost the reverse.

What we learn from disasters and calamities is of great benefit. It stops us from making the same mistakes time and time again. Now, I know that doesn’t last forever. Human memory is not like a machine recording. We are incredibly selective (hence films like Reminiscence).

In my mind, none of this persistent immersion in stories with bad outcomes stops us from planning. To be positive, it stops us taking our plans for what we can do into the realms of pure fantasy. Or at least it should.


[1] https://youtu.be/zUQiUFZ5RDw

[2] https://en.wikipedia.org/wiki/The_Day_After_Tomorrow

[3] https://en.wikipedia.org/wiki/The_Day_the_Earth_Stood_Still_(2008_film)

[4] https://www.newscientist.com/article/mg25834380-100-why-we-shouldnt-fill-our-minds-with-endless-tales-of-dystopia

[5] https://hitchhikers.fandom.com/wiki/Shoe_Event_Horizon

Fake/Real?

So, why might artificial intelligence (AI) be so dangerous in a free society?

Democracy depends upon information being available to voters. Ideally, this would be legal, decent, and honest information. All too often the letter of the law may be followed whilst shaping a message to maximise its appeal to potential supporters. Is it honest to leave out chunks of embarrassing information for the one nugget that makes a politician look good? We make our own judgement on that one. We make that judgement assuming that outright lying is a rare case.

During key elections news can travel fast and seemingly small events can be telescoped into major debacles. I’m reminded of the remark made by Prime Minister Gordon Brown[1] when he thought the media’s microphones were dead. In 2010, when an aide asked, “What did she say?”, Gordon Brown was candid in his reply. It’s an occasion when the honest thoughts of a PM on the campaign trail popped into the public domain and livened up that election coverage considerably.

What’s concerning about AI[2] is that, in the hands of a “bad actor”, such events could be faked[3] extremely convincingly. Since the fast pace of election campaigning never leaves enough time for in-depth technical investigations, there’s a chance that fake events can sway people before they are uncovered. The time between occurrence and discovery need only be a few days. Deep fakes are moving from amateur student pranks to the tools of propagandists.

Misinformation happens now, you might say. Well, yes it does, and we do need people to fact-check claims and counterclaims on a regular basis. However, we still depend on simple techniques, like a reporter or member of the public asking a question. It’s rather basic in format.

This leaves the door open for AI to be used to produce compelling fakes. Sometimes, all it needs is to inject or eliminate one word from a recording or live event. The accuracy and speed of complex algorithms in providing seamless continuity is new. It can be said that we are a cynical lot. For all the protests of fakery that a politician may make after an exposure, there will be plenty of people who will not accept any subsequent debunking.

My example is but a simple one. There’s a whole plethora of possibilities when convincing fake pictures, audio and videos are only a couple of keyboard strokes away.

Regulatory intervention by lawmakers may not be easy but it does need some attention. In terms of printed media, that is election leaflets, there are strict rules. The same goes for party political broadcasts.

Being realistic about the risks posed by technology is not to shut it down altogether. No, let’s accept that it will become part of our lives. At the same time, using that technology for corrupt purposes obviously needs to be stamped on. Regulatory intervention is a useful way of addressing heightened risks. Some of our 19th century assumptions about democracy need a shake-up. 


[1] https://www.independent.co.uk/news/uk/politics/bigotgate-gordon-brown-anniversary-gillian-duffy-transcript-full-read-1957274.html

[2] https://edition.cnn.com/2023/05/16/tech/sam-altman-openai-congress/index.html

[3] https://www.nytimes.com/2023/02/07/technology/artificial-intelligence-training-deepfake.html

Engineering

I know this is not a new issue to raise but it is enduring. Years go by and nothing much changes. One of the reasons that “engineering” is poorly represented in the UK is that its voice is fragmented.

I could do a simple vox pop. Knock on a random door and ask: who speaks for engineers in the UK? The likelihood is that few would give an answer, let alone name an organisation. If I asked who speaks for doctors, those in the know would say the BMA[1]. If I asked who speaks for lawyers, most would answer the Law Society[2]. I dare not ask who represents accountants.

Professional engineering institutions have an important role. That’s nice and easy to say; in fact, all the ones that are extant do say so. Supporting professional development is key to increasing access to engineering jobs. Their spokespersons, specialist groups and networking opportunities can provide visibility of the opportunities in the profession.

So, why are there so many different voices? There’s a great deal of legacy. An inheritance from bygone eras. I see lots of overlap in the aviation and aerospace industries. There are invitations in my inbox to events driven by the IET[3], IMechE, the Royal Aeronautical Society and various manufacturing, software, safety, and reliability organisations.

The variety of activities may serve specialist niches, but the overall effect is to dilute the impact the engineering community has on our society. Ever-present change means that new specialist activities are arising all the time. It’s better to adapt and include these within existing technical institutions rather than invent new ones.

What’s the solution? There have been amalgamations in the past. Certainly, where there are significant overlaps between organisations then amalgamation may be the best way forward.

There’s the case for sharing facilities. Having separate multiple technical libraries seems strange in the age of the connected device. Even sharing buildings needs to be explored.

Joint activities do happen but not to the extent that could fully exploit the opportunities that exist.

If the UK wishes to increase the number of competent engineers, it’s got to re-think the proliferation of different institutions, societies, associations, groupings, and licencing bodies.  

To elevate the professional status of engineering in our society we need organisations that have the scale and range to communicate and represent at all levels. Having said the above, I’m not hopeful of change. Too many vested interests are wedded to the status quo. We have both the benefits of our Victorian past and the millstone of that grand legacy.


[1] https://www.bma.org.uk/

[2] https://www.lawsociety.org.uk/en

[3] http://www.theiet.org/

Deregulation

There’s nothing wrong with making an argument for deregulation. What’s absurd is to make that argument as an unchallengeable dogma. It’s the irrationality of saying that deregulation is good, and regulation is bad, de facto. This kind of unintelligent nonsense does permeate a particular type of right-wing political thinking. It pops its head up in a lot of Brexiters’ utterances. For advocates of Brexit, the great goal is to throw away rules and lower standards. Mostly, this is for financial gain.

Let’s take some simple examples. The reasons for rules and regulations can often be found in recent history. Hazards are recognised and action is taken.

There’s still lead paint to be found in many older houses. There was a time when such paint was used on children’s toys. Toy safety has been a confusing area of law, and there have been several sets of regulations since the 1960s. From our current perspective this past laxness seems insane, but such lead paint mixtures were commonplace. In fact, all sorts of toxic chemicals have been used in widely used paints.

I remember working in one factory building where a survey was done of the surrounding grounds. Outside certain windows there were small fluorescent flags placed in the grass verges. They marked places where minor amounts of radiation had been detected. This came from discarded paintbrushes and tins that had accumulated in the war years. At that time radioactive luminescent paint was used to paint aircraft instrument dials.

Any argument for the deregulation of toxic chemicals in commonly used paints is one that should be quashed instantly. However, some deregulation fanatics are only too happy to endorse a loosening of the rules that protect the public from toxic chemicals.

One result of the loosening of public protection is often to put greater profits in the hands of unscrupulous industrialists. Across the globe there are numerous case studies of this sad folly. Newspapers and political parties that push the line that rules, regulations and regulators are, by their very nature, crushing our freedoms are as bad as those unscrupulous industrialists.

Yes, there’s a case to be made for pushing back against over-regulation. There are risks we are prepared to take where the risks are low and the benefits are large. This is a matter for intelligent debate and not throwing around mindless slogans. We should not be cowed by loud voices from small corners of society intent on tearing down decades of learning and sound practical laws. I could come up with an encyclopaedic list of examples. Opponents rarely, if ever, want to address a particular case since it’s much easier for them to thunder off sweeping assertions. Beware these siren voices.

NOTE: The Toys (Safety) Regulations 2011 implemented the requirements of Directive 2009/48/EC, whose purpose is to ensure a high level of toy safety.

Experts

The rate of increase in the power of artificial intelligence (AI) is matched by the rate of increase in the number of “experts” in the field. I’ve heard that jokingly said. Five minutes on Twitter and it’s immediately apparent that off-the-shelf opinions run from “what’s all the fuss about?” to “Armageddon is just around the corner”.

Being a bit of a stoic[1], I take the view that opinions are fine, but the question is: what’s the reality? That doesn’t mean ignoring honest speculation, but that speculation should have some foundation in what’s known to be true. There are plenty of emotive opinions that are wonderfully imaginative. The problem is that they don’t help us take the best steps forward when faced with monumental changes.

Today’s report is of the retirement of Dr Geoffrey Hinton from Google. Now, there’s a body of experience in working with AI. He warns that the technology is heading towards a state where it’s far more “intelligent” than humans. He’s raised the issue of “bad actors” using AI to the detriment of us all. These seem to me valid concerns from an experienced practitioner.

For decades, the prospect of a hive mind has peppered science fiction stories with tales of catastrophe. With good reason, given that mind-to-mind interconnection is something that humans haven’t mastered. This is likely to be both the highest risk and the greatest potential benefit. If machine learning can gain knowledge at phenomenal speeds from a vast diversity of sources, it becomes difficult to challenge. It’s not that AI will exhibit wisdom. It’s that its acquired information will give it the capability to develop, promote and sustain almost any opinion.

Let’s say the “bad actor” is a colourful politician of limited competence with a massive ego and ambition beyond reason. Sitting alongside AI that can conjure up brilliant speeches and strategies for beating opponents, that character can become dangerous.

So, to talk about AI as the most important inflection point in generations is not hype. In that respect the rapid progress of AI is like the invention of dynamite[2]. It changed the world in both positive and negative ways. Around the world countries have explosives laws and require licences to manufacture, distribute, store, use, and possess explosives or their ingredients.

So far, mention of the regulation of AI makes people in power shudder. Some lawmakers are bigging-up a “light-touch” approach. Others are hunched over a table trying to put together threads of a regulatory regime[3] that will accentuate the positive and eliminate the negative[4].


[1] https://dailystoic.com/what-is-stoicism-a-definition-3-stoic-exercises-to-get-you-started/

[2] https://en.wikipedia.org/wiki/Dynamite

[3] https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

[4] https://youtu.be/JS_QoRdRD7k

AI awakens

Artificial Intelligence (AI)[1] is with us. Give it a question and it will answer you. Do it many times, with access to many information sources and it will improve its answer to you. That seems like a computer that can act like a human. In everyday reality, AI mimics a small number of the tasks that “intelligent” humans can do and do with little effort.

AI has a future. It could be immensely useful to humanity. As with other revolutions, it could take the drudgery out of administrative tasks, simple research, and well characterised human activities. One reaction to this is to joke that – I like the drudgery. Certainly, there’s work that could be classified as better done by machine but there’s pleasure to be had in doing that work.

AI will transform many industries, but will it ever wake up[2]? Will it ever become conscious?

A machine acting human is not the same as it becoming conscious. AI mimicking humans can give the appearance of being self-aware but it’s not. Digging deep inside the mechanism it remains a computational machine that knows nothing of its own existence.

We don’t know what it is that can give rise to consciousness. It’s a mystery how it happens within our own brains. It’s not a simple matter. It’s not magic either but it is a product of millions of years of evolution.

Humans learn from our senses. A vast quantity of experiences over millennia have shaped us. Not by our own choosing but by chance and circumstances. Fortunately, a degree of planetary stability has aided this growth from simple life to the complex creatures we are now.

One proposition is that complexity and consciousness are linked. That is, consciousness in a machine may arise from billions and billions of connections and experiences. It’s an emergent behaviour that arises at some unknown threshold. As such, this proposition leaves us with a major dilemma. What if we inadvertently create conscious AI? What do we do at that moment?

Will it be an accidental event? There are far more questions than answers. No wonder there’s a call for more research[3].


[1] https://www.bbc.co.uk/newsround/49274918

[2] https://www2.deloitte.com/us/en/pages/consulting/articles/the-future-of-ai.html

[3] https://www.bbc.co.uk/news/technology-65401783.amp

Going backwards

I find it difficult to believe anyone who gets into their sixties and says that they have never had an accident. My latest isn’t original or without minor consequence. Yesterday morning a kind nurse gave me a bandage to hold two fingers straight.

At the start of the month my gardening efforts amounted to emptying pots, replanting pots, and moving pots. I’ve got far too many pots. Plants that had not survived the winter freeze were unceremoniously sent to the compost heap. Plants that looked like the spring was bringing them back to life were given a bit of pampering.

Sitting in the shade, one large square container held a small fir tree. The tree wasn’t in the best of health but remained well worth saving. What I was unhappy about was its position on the patio out the back of the house. So, it occurred to me that it was logical to move the container to a spot where the tree might flourish in future. The large square container was made of fiberglass but had the appearance of grey slate. It had been standing unhindered in one place for well over a year.

Now, you would think an engineer, like me, would know something about friction. Or in this case stiction, that is, the friction that tends to stop stationary surfaces from easily moving. There was more than one way I could have attempted to move this heavy garden container. One was to push and the other was to pull. I opted to pull and that was my big mistake.
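
For anyone who wants the textbook version of why nothing moved and then everything moved at once, the standard static/kinetic friction model is enough. The mass and coefficient below are rough, illustrative guesses, not measurements of my container.

```latex
% Standard dry-friction model (illustrative numbers only).
% No motion while the pull stays below the static limit; once sliding
% starts, resistance drops to the lower kinetic value.
\[
  F_{\text{pull}} \le \mu_s N \ \text{(no motion)}, \qquad
  F_{\text{sliding}} = \mu_k N \ \text{with}\ \mu_k < \mu_s .
\]
% Example: a planted container of roughly 60 kg with mu_s ~ 0.6 resists
% until the pull exceeds about 0.6 x 60 x 9.81 ~ 350 N, and the sudden
% drop to the kinetic value after breakaway is one reason the give can
% feel like it comes without warning.
```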

I crouched down and with both hands pulled hard. The container was stubborn. Again, I pulled hard. Then, without warning, the side of the container gave way, and I went flying. Afterwards, I wished I had paid attention to what was behind me. I hit the ground awkwardly.

When adverse events like this happen, it’s as if time momentarily slows down. Naturally, it doesn’t but it feels like an out of body experience when there’s nothing you can do to stop the inevitable happening. That split second ended with me lying on my right side with my hand extended two steps down on the patio steps. My fall was broken by my backside and my right-hand middle finger.

Oh dear, this is going to hurt – that was my first thought as I lay on the hard ground looking back at the roots of the tree I was trying to save. Second thought was – why did I do that?

Knowledge with hindsight can be a universal blight. Sure, I wouldn’t have done what I did if I’d taken the time to think more deeply about all the possible consequences linked to moving heavy objects. In this case there was only me sitting on the ground painfully recounting what happened. No one to say – are you crazy? You shouldn’t have pulled that part of that container.

Accidents are a part of life. Better they be minor. Better we learn from them every time.

Working hard for the money

What goes wrong with research spending? It’s a good question to ask. In some ways research spending is like advertising spending. “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.[1]” Globally billions are spent on advertising so you might say – it must be working. In fact, far more is spent on advertising than is ever available for research in the aviation and aerospace world.

Research spending is a precious asset because it is bounded. Even so, a great deal of research spending is lost on activities that deliver little or no benefit. It’s true that Governments, institutions and industry don’t often put up funds for vague and imprecise aspirations or outlandish predictions, but nevertheless money goes down a sinkhole on far too many occasions.

A reluctance to take tough decisions or, at the other extreme of the spectrum, a relish for disruption plagues research funding decision-making. Bad projects can live long lives and good projects get shut down before their time. My observation is that the following cases crop up all too often across the world.

Continuing to service infrastructure that cost a great deal to set up. It’s the classic sunk-cost problem: having spent large sums of money on something, the desperation to see a benefit encourages yet more spending. Nobody likes to admit defeat or that their original predictions were way off the mark.

Circles of virtue are difficult to address. For example, everyone wants to see a more efficient and sustainable use of valuable airspace, so critics of spending towards that objective are not heard, even if substantial spending is misdirected or hopelessly optimistic.

Glamourous and sexy subjects, often in the public limelight, get a leg-up when it comes to the evaluation of potential research projects. Politicians love press photographs that associate them with something that looks like a solution in the public mind. Academics are no different in that respect.

Behold unto the gurus! There are conferences and symposiums where ideas are hammered home by persuasive speakers and charismatic thinkers. Amongst these forums there are innovative ideas but also those that get more consideration than they warrant.

Narrowly focused recommendations can distort funding decision-making. With the best of intent, an investigation or study group might highlight a deficiency that needs work, but it sits in a distinct niche of interest. It can be a push in the opposite direction to a Pareto analysis[2].
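
For readers who haven’t met the term, a Pareto analysis ranks contributing factors so that attention goes to the few that account for most of the effect. Below is a minimal sketch of that ranking; the categories and figures are invented purely for illustration.

```python
# Minimal sketch of a Pareto analysis: rank contributing categories by
# impact and pick out the few that account for most of the total (the
# familiar "80/20" pattern). Categories and figures are invented.


def pareto_vital_few(contributions: dict, cutoff: float = 0.8) -> list:
    """Return the categories that together reach `cutoff` of the total."""
    total = sum(contributions.values())
    running, vital_few = 0.0, []
    for name, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
        running += value
        vital_few.append(name)
        if running / total >= cutoff:
            break
    return vital_few


# Hypothetical breakdown of wasted research spend by cause
waste = {
    "sunk-cost infrastructure": 42,
    "fashionable topics": 25,
    "guru-driven projects": 18,
    "niche recommendations": 10,
    "other": 5,
}
print(pareto_vital_few(waste))  # -> the few causes covering ~80% of the waste
```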

Highlighting these points is easier than fixing the underlying problems. It’s a good start to be aware of them before pen meets paper and a contract is signed.


[1] statement on advertising, credited to both John Wanamaker (1838-1922) and Lord Leverhulme (1851-1925).

[2] https://asq.org/quality-resources/pareto