AI, space, defence essential secrecy and data – the industry opportunities
With the Morrison Government's announcement last June of a commitment to spend $270 billion over the next decade on defence capabilities, it's an exciting and fruitful time for partnerships between Defence and industry.
There are significant, long-term industry opportunities for those of us who work in the AI, Space, Defence Essential Secrecy and Data spaces. To explore them, I'll cover:
- the current data and AI landscape;
- the opportunities and challenges; and
- a forward look into what's next in the industry.
Current landscape: Data explosion stats and facts
In this era of digital transformation, we are experiencing what can only be described as a "data explosion".
We are roughly doubling the world's data every two to three years.
A zettabyte is a trillion gigabytes. The global datasphere is projected to grow from 33 zettabytes in 2018 to approximately 175 zettabytes by 2025.
A stack of DVDs holding 175 zettabytes, each disc 1.2 mm thick, would wrap around the Earth 222 times.
It's as if a country went from two warships in 2010 to a fleet of more than 600 warships within 20 years (potentially larger than the current US Navy fleet).
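For those who like to check the arithmetic, here is a minimal back-of-the-envelope sketch of the growth rate implied by the IDC projection cited above. The 33 and 175 zettabyte figures are the ones quoted in this speech; the assumption of smooth compound growth is mine. It works out to roughly 27 per cent growth a year, or the datasphere doubling about every three years.

```python
# Back-of-the-envelope sketch: growth implied by the cited IDC projection
# (33 ZB in 2018 growing to ~175 ZB by 2025), assuming smooth compound growth.
import math

start_zb, end_zb = 33, 175      # global datasphere, in zettabytes
years = 2025 - 2018             # projection window

growth_rate = (end_zb / start_zb) ** (1 / years) - 1     # compound annual growth
doubling_time = math.log(2) / math.log(1 + growth_rate)  # years for data to double

print(f"Implied growth rate: ~{growth_rate:.0%} per year")   # ~27% per year
print(f"Implied doubling time: ~{doubling_time:.1f} years")  # ~2.9 years
```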
And we are generating an enormous amount of unstructured data, such as videos, social media posts and geolocation, especially from our mobile devices. In the last minute alone there were:
- 480,000 tweets;
- 4.7 million YouTube video views;
- 400 new Facebook users;
- 60,000 new images on Instagram;
- 200 million emails sent; and
- 4.2 million Google search queries.
Governments and corporations have adapted by becoming more data-savvy, and have moved, or are looking to move, to deploying innovative Software as a Service (SaaS) offerings.
One example is Clayton Utz. It is a law firm, yet I – a non-lawyer – joined the Partnership four years ago to start and head up the Forensic and Technology Services team, so we could help to solve complex client problems through the sophisticated use and analysis of data.
Current landscape: AI
Gone are the days when you went to the library or opened up an encyclopaedia if you wanted to know something. The answers are in your pocket, although they might be hidden in the deluge of data we create every day.
That's where automation, AI and continuous active learning come in to make sense of it all and lead to good business decisions.
Last year McKinsey surveyed over two thousand participants across a range of industries and company sizes:
- 50% of respondents reported that their companies have adopted AI in at least one business function;
- AI is most commonly adopted in product and service development, service operations and marketing; and
- 22% of respondents attributed at least 5% of their organisation's EBIT to AI.
They identified some risks, such as:
- explaining how AI models come to their decisions;
- personal privacy;
- regulatory compliance; and
- cybersecurity.
Two takeouts from the survey are particularly interesting.
First, AI is not faultless. Some models have performed poorly because they were trained on pre-COVID-19 economic and consumer behaviours. And even without the excuse of a pandemic, they can lead to unwanted results, such as Amazon's AI recruitment tool favouring male candidates.
Secondly, cyber-attacks have risen over the last 12 months, and cybersecurity was the risk respondents flagged most often.
This leads to my next point.
AI generates new types of attack, and new actors. For example, we've seen the first "deepfake" voice-spoofing attacks, where attackers mimicked someone's voice to divert funds.
In Defence and cyber scenarios we often talk of state actors as being the most significant players in global sovereignty and security. What's both interesting and frightening about AI is the asymmetry of effect that it offers; it allows non-state and big-tech actors to achieve a level of effect that 25 years ago would have only been available to well-resourced state level actors working with several floors of analysts.
AI opportunities vs challenges for Government
The United States' National Security Commission on Artificial Intelligence has just released its final report: over 750 pages highlighting, among other things, the challenges for industry.
It stresses working with fellow democracies and the private sector to build privacy-protecting standards into AI technologies. Significantly, it also specifically calls out bureaucracy as thwarting the better partnerships with the private sector that could otherwise help: the government must be both a good customer and a willing partner.
Now, while the government can't compete with private sector salaries, it has some advantages over the private sector, which is burdened by policy and red tape considerations as well as potential legal and perception risks.
The future will need Government to act as caretaker of the public's best interests, balancing needs such as privacy against the need for advancement on a national scale to keep pace with growing AI usage.
AI opportunities vs challenges for industry
Much as has been the case with accounting and digital security standards, it will be the private sector that is tasked with testing and auditing the automation of the future, and we need to be enabled to do so.
And along with that is the commercial value of the data out there, which is constantly growing as individuals trade data privacy – or secrecy – for convenience at an unprecedented rate.
AI-driven advertising builds profiles of what to market to us based on behavioural analytics, drawing on data such as:
- photos and videos across social media;
- our location; and
- our contact details, passwords, phone numbers and mothers' maiden names.
The AI revolution is here and it is fuelled by data. The more private, the more valuable. And all that data is vulnerable to being pushed into more and more databases, often to be leaked, harvested and recycled by whoever finds it.
The risks and rewards of regulation
There will be temptations to immediately limit the use of AI tools, or potentially make knee-jerk decisions. Some countries are already limiting the use of AI and biometrics, but we need to tread carefully.
We can and should establish regulations and techniques, processes, and procedures just as we have done for other emerging technologies and concerns.
We must acknowledge that AI-related security challenges are inevitable, particularly given that:
- criminals and hostile state actors won't necessarily follow regulation;
- corporations and the government absolutely will have to; and
- an AI will execute incorrect instructions at a speed no human can stop.
But if business leaders and policy makers embrace and adopt AI, and AI literacy increases, that will allow for investment and the continued advancement of the research and innovation that will help to address these challenges.
What's next
So what's next in the industry? I'm going to share with you some quickfire observations, predictions and advice for the future:
- AI is already enabling large-scale social engineering. This will continue.
- Robots will be able to perform social engineering attacks and payment diversion fraud end-to-end far more efficiently than a human ever could, and in real time.
- There are already more robots than people on the internet; attackers may be able to repurpose these robots into a collective AI if they are vulnerable.
- AI defence measures will be the only defence against AI attackers, both to respond quickly and to warn those who are vulnerable.
- And to be ready for this, we must make sure that the Australian government has access to the best teams.
- The right incentives need to be deployed for both the private and public sectors to work together in shared spaces for training, datasets and advancement.
- Cyber security needs to be at the forefront of advancement.
- The OAIC and other associated bodies should gather information on AI-based threats.
- We need to support the private sector in sharing threat intelligence and findings.
Conclusion
Data is undoubtedly one of the most valuable assets of any modern organisation and Government. We're now at a stage where most of our data is already out of the bag, and we're trying to decide what happens next. The rapid and (hopefully) responsible advancement of AI will shape how data is used or misused in our murky future; a future of big data, big-tech, big wins, and big losses.
As Governments and corporations become more data-savvy, opportunities and challenges will push the industry to innovate. Combined with the substantial investment in Defence capabilities, the next ten years could show more human progress than the last 100. Almost every industry will be reinvented. It's an exciting time to do the work we do.
This is a speech given at the WAEDI Defence Industry Forum Breakfast, Perth, March 2021.