Artificial Intelligence (AI) is permeating various sectors, disrupting traditional practices and systems. Its application to early-stage investment marks a significant paradigm shift, presenting both opportunities and challenges: it replaces traditional 'gut-feel' decisions with data-driven insights, yet the debate around AI still swings between hailing it as a solution to the world's problems and condemning it as a harbinger of endless difficulties. As AI advances, the landscape of early-stage investment is expected to undergo significant transformation. In this context, we aim to explore the impact of AI on early-stage investments, the ethical considerations involved, and the future of regulation.
The Past: A human-centric approach
Historically, the early-stage investment sector has been dominated by human intuition, industry experience, and judgment. A handful of highly experienced venture capitalists manage substantial portfolios and make critical investment decisions based on years of experience, industry knowledge, and gut instinct. In this context, the role of AI has been minimal: according to Gartner Inc., less than 5% of venture capital decisions involved AI in 2021.
Investment decisions, especially for early-stage ventures, have relied heavily on assessing the startup team's competence, the viability of the business model, and market potential. This involves thorough due diligence, including testing a minimum viable product (MVP) with potential customers to ensure it is strong enough to justify further investment. With AI, however, ideas can be validated far faster and at far lower cost.
This age-old process is often time-consuming and requires investors to sift through vast amounts of data. Although successful to a degree, it is also prone to human error and bias, as in any other field.
The Present: AI empowerment
VCs are already leveraging machine learning tools to evaluate investment opportunities, significantly reducing decision-making time. AI platforms such as EQT Ventures' 'Motherbrain' use predictive analytics across a range of signals to rank investment opportunities by their likelihood of success, enabling investment professionals to focus on the highest-ranking prospects first.
Veronica Wu, a founding partner of Hone Capital, an offshoot of one of China's biggest venture capital firms, developed a machine learning model to enhance her investment decision-making. She reported that, within 15 months, 40% of the businesses the model had recommended for investment went on to secure follow-on funding rounds, 2.5 times the industry norm.
Additionally, several firms claim to have built proprietary AI applications for venture capital. These include InnoRate, an EU-funded initiative that assesses the opportunities and risks associated with small and medium-sized enterprises, and Raized.ai, which is developing an investment intelligence system that uses AI to make venture capital more efficient, data-driven, and automated. Researchers at the University of St. Gallen built an investment algorithm to pinpoint the most promising opportunities and benchmarked it against 255 angel investors: the algorithm beat the human investors' average returns by 184%.
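To make the idea concrete, below is a minimal, illustrative sketch in Python (using scikit-learn and synthetic data) of how such a ranking approach might work: a classifier is trained on past deals to estimate each prospect's likelihood of raising follow-on funding, and the scores are then used to order a pipeline of new prospects. The features, figures, and model choice are assumptions for illustration only, not a description of any of the platforms mentioned above.

```python
# Illustrative sketch: rank prospective startups by predicted likelihood of
# securing follow-on funding. All features and data are synthetic/hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n = 500  # synthetic historical deals

# Hypothetical signals an investor might track for each startup
historical = pd.DataFrame({
    "founder_prior_exits": rng.integers(0, 3, n),
    "monthly_growth_pct": rng.normal(8, 5, n).clip(0),
    "burn_multiple": rng.normal(2.5, 1.0, n).clip(0.5),
    "market_size_usd_bn": rng.lognormal(1.5, 1.0, n),
})

# Synthetic label: did the startup go on to raise a follow-on round?
logits = (0.8 * historical["founder_prior_exits"]
          + 0.15 * historical["monthly_growth_pct"]
          - 0.5 * historical["burn_multiple"])
historical["follow_on"] = (logits + rng.normal(0, 1, n) > 1.5).astype(int)

# Train a classifier on past deals
features = historical.drop(columns="follow_on")
model = GradientBoostingClassifier().fit(features, historical["follow_on"])

# Score a small pipeline of new prospects and rank them for human review
prospects = features.sample(5, random_state=1).copy()
prospects["score"] = model.predict_proba(prospects)[:, 1]
print(prospects.sort_values("score", ascending=False))
```

In practice the value comes from the breadth and freshness of the underlying data rather than the model itself; the ranking is a triage tool that tells investors where to look first, not a substitute for due diligence.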
The Future: AI investing
AI is revolutionising the early-stage investment landscape. Gartner Inc. forecasts that by 2025, AI will contribute to 75% of venture capital investment decisions. AI has started demonstrating its potential in enhancing the effectiveness and efficiency of the investment process.
The deployment of AI in early-stage investment brings several advantages. Firstly, the capacity of AI to process vast amounts of data far exceeds human abilities, augmenting the speed and accuracy of decision-making. AI can also identify patterns and correlations in complex data sets, enhancing the prediction of potential outcomes. This improved prediction capacity can assist investors in recognising high-quality startups with higher chances of securing follow-on funding.
Moreover, AI adds a layer of objectivity to investment decisions, grounding them in data and improving their accuracy and reliability. By analysing extensive data and market trends, AI can also aid in diversifying an investor's portfolio by suggesting investments in sectors that might not be immediately apparent or familiar to the investor.
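As a simple illustration of the diversification point, the sketch below compares a portfolio's current sector exposure against a hypothetical model-predicted growth signal to flag under-represented sectors. The sector names, amounts, and growth figures are invented for the example and stand in for whatever signal a real system would produce.

```python
# Minimal sketch: flag sectors where a portfolio looks light relative to a
# (hypothetical) model-predicted growth signal.
import pandas as pd

# Hypothetical current portfolio
portfolio = pd.DataFrame({
    "startup": ["A", "B", "C", "D", "E"],
    "sector": ["fintech", "fintech", "healthtech", "fintech", "climate"],
    "invested_gbp": [250_000, 400_000, 300_000, 150_000, 200_000],
})

# Current sector weights of the portfolio
weights = portfolio.groupby("sector")["invested_gbp"].sum()
weights = weights / weights.sum()

# Hypothetical model output, e.g. predicted sector growth over the next year
predicted_growth = pd.Series(
    {"fintech": 0.05, "healthtech": 0.12, "climate": 0.15, "deeptech": 0.10})

# Sectors with high predicted growth but low portfolio exposure rise to the top
gap = predicted_growth - weights.reindex(predicted_growth.index, fill_value=0)
print(gap.sort_values(ascending=False))
```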
AI's contribution to increased profits is already evident in cases such as EQT Ventures (a Swedish private equity firm) investing €6.1 million ($6.62 million) in Peakon, an employee engagement software company, a deal identified primarily through its AI platform. Such instances illustrate AI's ability to improve investment outcomes.
Possible disadvantages and risks
Despite its potential, AI's integration into the early-stage investment landscape poses certain risks. Over-reliance on AI could result in less investment in startups delivering breakthrough innovations, as AI tends to favour familiar business patterns.
Moreover, ethical and regulatory concerns around the transparency and accountability of AI algorithms, especially with the increasing involvement of Big Tech companies in managing financial data, require careful scrutiny. Ensuring that AI-powered decisions can be explained and understood by humans is vital to maintain trust and confidence in AI-enabled financial services.
Big Tech companies play a significant role in the financial services sector, particularly as gatekeepers of data, and the asymmetric data-sharing between Big Tech and financial services firms raises concerns about potential anti-competitive behaviour. The potential risks to operational resilience in payments, retail services, and financial infrastructure are substantial. Likewise, the potential manipulation of consumer behavioural biases cannot be overlooked. It is important to address whether the unique and comprehensive datasets held by Big Tech firms - covering browsing data, biometrics, social media, and financial transaction data - could pose significant risks to market functioning and competition. AI's ability to mimic human intelligence raises numerous ethical questions, including concerns over economic stability, exploitation of consumer data, and the privacy implications of hyper-targeting. However, there is also a hesitance to embrace its benefits due to a fear of the unknown.
AI can cause market imbalances, influence price discovery, and affect transparency and fairness. Misinformation fuelled by social media and generative AI can destabilise markets on a global scale, as evidenced by recent AI-generated hoaxes such as the fake image of a building near the Pentagon that appeared to be billowing black smoke, which sent stocks tumbling and has been popularly dubbed 'the first AI-generated image that moved markets'.
The rise of short-term, highly automated trading strategies further underscores the heightened volatility AI can introduce into the market. AI also introduces new dimensions to issues like fraud prevention, operational resilience, and cyber defence. AI's adoption necessitates an acceleration in investment in these areas.
AI’s "explainability," potential data bias, and the resulting implications is another topic for scrutiny. Safe and responsible AI adoption must be underpinned by high-quality data. Poor data quality can lead to poor algorithmic outcomes, highlighting the importance of data quality assessments to determine relevance, representation, and suitability.
Furthermore, the use of AI must comply with existing legislation, particularly data protection law such as the Data Protection Act 2018, the UK’s implementation of the General Data Protection Regulation (GDPR).
These issues point to the need for a balanced approach in adopting AI, where it supplements human judgement rather than replacing it entirely.
Regulatory considerations
Regulators are developing frameworks to balance AI's potential with its risks. The FCA and Bank of England have published a joint AI Discussion Paper, focusing on whether AI can be managed through fine-tuning the existing regulatory framework or whether a new approach is needed. The paper also seeks to understand how best to support the safe and responsible adoption of AI.
Regulators worldwide are proactively preparing for an AI-driven future, focusing on fostering innovation while mitigating risk. Collaborative efforts such as the Global Financial Innovation Network and the UK Digital Regulation Cooperation Forum are aimed at managing the risks and opportunities associated with AI.
The role of governance in AI adoption is vital, particularly in managing AI bias and model risk and in ensuring responsibility in the design, development, deployment, and evaluation of AI models. By leveraging existing frameworks such as the FCA’s Senior Managers and Certification Regime (SM&CR), regulators aim to tackle the practical challenges of AI governance, such as reducing harm to consumers and strengthening market integrity. This appears to rely on making individuals more accountable for their conduct and competence. As part of this, the SM&CR aims to:
1. encourage staff to take personal responsibility for their actions
2. improve conduct at all levels
3. make sure firms and staff clearly understand and can show who does what.
AI's transformative impact on early-stage investment points to a future characterised by efficient, accurate, and data-driven investing. However, the key to success lies in adopting AI intelligently and responsibly, balancing its potential against its inherent risks and ensuring that it supplements human judgement rather than replacing it.
The future trajectory of early-stage investment will therefore be shaped not just by AI's innovative potential but also by the agility of regulatory bodies in managing its impacts. Through considered action, we can maximise the transformative potential of AI, ensuring market integrity, fostering competition, and protecting consumers in an AI-enabled investment world.
Sources:
1) WSJ. (n.d.). IT | Gartner Inc. Stock Price & News. [online] Available at: https://www.wsj.com/market-data/quotes/IT.
Council, J. (2021). VC Firms Have Long Backed AI. Now, They Are Using It. Wall Street Journal. [online] 25 Mar. Available at: https://www.wsj.com/articles/vc-firms-have-long-backed-ai-now-they-are-using-it-11616670000.
2) Technology and Operations Management. (n.d.). Venture Capital: Rumors of my death have been greatly exaggerated. [online] Available at: https://d3.harvard.edu/platform-rctom/submission/venture-capital-rumors-of-my-death-have-been-greatly-exaggerated/
3) BRINK – Conversations and Insights on Global Business. (n.d.). How AI Is Transforming Venture Capital. [online] Available at: https://www.brinknews.com/how-ai-is-transforming-venture-capital/.
4) Gartner. (n.d.). Gartner Says Tech Investors Will Prioritize Data Science and Artificial Intelligence Above ‘Gut Feel’ for Investment Decisions By 2025. [online] Available at: https://www.gartner.com/en/newsroom/press-releases/2021-03-10-gartner-says-tech-investors-will-prioritize-data-science-and-artificial-intelligence-above-gut-feel-for-investment-decisions-by-20250.
5) Sorkin, A.R., Warner, B., Kessler, S., Merced, M.J. de la, Hirsch, L. and Livni, E. (2023). An A.I.-Generated Spoof Rattles the Markets. The New York Times. [online] 23 May. Available at: https://www.nytimes.com/2023/05/23/business/ai-picture-stock-market.html.
6) FCA (2022). DP22/4: Artificial Intelligence. [online] 10 Oct. Available at: https://www.fca.org.uk/publications/discussion-papers/dp22-4-artificial-intelligence.
7) FCA (n.d.). Senior Managers and Certification Regime (SM&CR). [online] Available at: https://www.fca.org.uk/firms/senior-managers-certification-regime.