AI: Update on the State-of-the-Art
AI & Bias
Everyone loves the "garbage in, garbage out" saying when it comes to programming or operational design. Nowhere is it truer than with Artificial Intelligence (AI). As you train your technology on use cases and data, what you feed the AI and how you teach it matter a great deal once it is set loose in the wild on its own.
I know of no case of someone consciously programming an AI with their own known bias against certain people or situations. Yet a bank deploys an AI-driven loan system, and an audit shows it is biased against a class of people. Perhaps it was trained to use zip codes as a proxy for future property values, and that taught it to deny or severely limit loans to anyone in certain zip codes. Boston has many examples of areas where one type of resident predominates over another. Again, I am sure the bias was not intentional; it emerged on its own as the system tried to bring intelligence to the problem.
This tells us how important it is to have a process for checking assumptions. Periodic quality control is essential, because a system may show no such issue at first rollout and only "learn" the bias as it absorbs more and more data. Guarding against these failures is critical to the fairness and equality the world so badly needs.
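One way such a periodic check can work in practice: compare approval rates across groups and flag large gaps. The sketch below is purely illustrative, not any real bank's audit; the group labels, data, and the 0.8 "four-fifths rule" threshold are assumptions chosen for the example.

```python
from collections import defaultdict

def disparate_impact(decisions):
    """decisions: list of (group, approved) pairs.
    Returns (ratio, rates): the lowest group approval rate divided by
    the highest, plus the per-group approval rates."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = {g: approvals[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical audit sample: loan decisions grouped by zip code.
decisions = [
    ("zip_A", True), ("zip_A", True), ("zip_A", True), ("zip_A", False),
    ("zip_B", True), ("zip_B", False), ("zip_B", False), ("zip_B", False),
]
ratio, rates = disparate_impact(decisions)
# A common rule of thumb flags a ratio below 0.8 for human review.
if ratio < 0.8:
    print(f"Potential disparate impact: ratio {ratio:.2f}, rates {rates}")
```

Run on a fresh sample of decisions at each audit, a check like this can catch bias that was absent at rollout but learned later.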
If you know of any startups tackling this problem, introduce us. This seems a perfect opening for great AI entrepreneurs to fix an emerging problem. I am always happy to learn about a solution at MCasady@VestigoVentures.com.
— Mark & Dave
AI Webinar
Technological Developments and Commercial Opportunities in Artificial Intelligence
w/ Dave Blundin, GP at Vestigo Ventures, and Ramesh Raskar, Associate Professor at MIT
European Union Acts on AI
There are few unhackneyed statements left to make about the power and risks of widespread commercial applications of machine learning (ML). The American zeitgeist remains occupied with the impact of algorithmic content selection on social media platforms like Twitter and Facebook. The popular narrative is that, whether to increase user activity or for more nefarious purposes, these companies' algorithms amplify fringe perspectives and shocking content, producing dangerous political divisiveness among regular users. This has raised seemingly existential questions about free speech and the need to regulate these technologies. It is difficult not to feel that these are among the most critical issues facing open societies today, and that they lack an easy solution.
The European Commission recently proposed new regulations on AI in which they stated four objectives:
- Ensure that AI systems in the European market are safe and respect existing law on fundamental rights and EU values
- Ensure legal certainty to facilitate investment and innovation in AI
- Enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems
- Facilitate the development of a single market for lawful, safe, and trustworthy AI applications and prevent market fragmentation
There is a consistent, pro-innovation theme throughout these objectives, which is crucial to achieving any of them. Much like in the United States, the posture is pro-AI but within (hopefully) well-defined rules. European regulators are focused on how AI systems behave in their deployment environment, and their emphasis on creating accountability for violations of the safety and privacy of EU citizens is commendable.
It’s important not to lose sight of the many advantages society has gained from these technologies in healthcare, e-commerce, and financial services. We don’t get to choose ex ante which innovations we keep and which side effects we avoid, and no regulatory framework can either. That said, whatever form the rules ultimately take, companies that place transparency at the forefront of their AI practices will be well positioned to win enduring trust from their customers, something American technology companies have utterly failed to do. Leaders within those organizations should read the message from Europe loud and clear: “If you don’t police yourselves, someone else will.”
We would love to hear your thoughts or concerns about the advancements in AI systems at MCasady@VestigoVentures.com and FAnderson@VestigoVentures.com.
— Mark & Frazer
Portfolio Updates
Railz Raises $12 Million Series A Round
We are extremely excited to announce that Railz has closed its $12M Series A round, led by Nyca Partners. Railz has been an amazing company to work with, and we look forward to its continued success.
Alloy Named Banking Tech of the Year
We are ecstatic to see that the US FinTech Awards have named Alloy the winner of their Banking Tech of the Year award.
John Wernz Joins LifeYield's Advisory Board
As proud investors in LifeYield, we are thrilled to see marketing leader John Wernz join their Advisory Board.
Interesting Reads
Apply to One of Our Portfolio Companies!
Our mailing address is:
Vestigo Ventures
1 Kendall Sq Ste B2101
Cambridge, MA 02139-1588
DISCLAIMER: The information presented in this newsletter is intended for general informational purposes only and may not reflect current law or regulations in your jurisdiction. By reading our newsletter, you understand that no information contained herein should be construed as legal, financial, or tax advice from the authors or contributors, nor is it intended to be a substitute for such counsel on any subject matter. No reader of this newsletter should act or refrain from acting based on any information included in, or accessible through, this newsletter without seeking appropriate professional advice on the specific facts and circumstances at issue from a professional licensed in the reader's state, country, or other appropriate licensing jurisdiction. This newsletter and its content should not be considered a solicitation for investment in any way.