It’s been a huge 12 months for data and models. From the emergence of generative artificial intelligence (GenAI) tools such as ChatGPT in late 2022 to the ever-increasing reliance on machine learning tools and analytics more generally, companies that don’t have a tight grip on their data risk being left behind.
To help ensure risks are reduced and rewards are reaped, some enterprises are employing high-level executives to manage their complex AI and algorithmic requirements. One such leader is Carter Cousineau, who is vice-president of data and model governance at news and information provider Thomson Reuters.
Cousineau joined the firm in September 2021. She previously gained a broad range of public and private sector experience, including serving as managing director of the Center for Advancing Responsible and Ethical Artificial Intelligence at the University of Guelph.
Her research interests span a range of topics, from human-computer interaction to trustworthy AI. She has also worked with technology startups, not-for-profit organisations, smaller businesses and Fortune 500 companies. Her aim – both at Thomson Reuters and more widely – is to develop a safe and secure approach to data use.
“I’m very passionate about ensuring we do things in an ethical and responsible manner, especially around technology,” she says.
“There are complexities in data and models across any organisation. One of my personal visions is I don’t see why we couldn’t get the appropriate controls to help with responsible use and ethics in these data and models. So, that’s something we work very closely on with all the different teams here.”
Building the right kind of culture
Cousineau was attracted to Thomson Reuters because of its blend of corporate opportunities and research challenges.
“While it’s a large, global company, it also has labs, which are research focused. It’s a firm that has a strong research and development practice built-in, which was something I wanted to be a part of organisationally,” she says.
“My experience blended well with some of the things the company was looking to do and that it was looking to expand internally. It’s been fun to put some of the research I’ve worked on into practice.”
Cousineau says most organisations have someone responsible for data and model governance, particularly finance firms, which must have robust AI practices in place because they’re heavily regulated. More generally, the seniority of the person responsible for governance depends on the business environment within which they’re operating.
“It’s great to go into an organisation and build the approach and put your own stamp into the organisation and see the change across the company,” she says.
“That’s different to being at a university, where you work on research projects and different initiatives. It’s been exciting for me to go into a corporation and to think about how we can instil influence and change the culture to help drive trust.”
Cousineau says her role looks across the entire enterprise. Her global team, which includes professionals in Canada, Switzerland, India, the UK and the US, covers the full data lifecycle at Thomson Reuters from the collection of data to the retiring of a model.
“We support every business function, whether you’re in people, marketing, finance or product,” she says. “Our work covers everything from the moment you’re creating the data or a model, all the way through to using data or models, and on to decommissioning them.”
Her team ensures information and insight are used in a well-governed and ethical manner. Cousineau says the support of her team helped make the switch from academia to enterprise a straightforward one.
“With any new role, there are people you’re inheriting,” she says. “But it’s a great team with a global footprint. The people are very talented and they’re all willing and ready to improve some of the things we’re doing as a business.”
Establishing foundations for ethical AI
Cousineau says a key part of the work she’s undertaking at Thomson Reuters involves building the foundational elements for effective data governance.
“That’s anything around applying policies and standards, and then moving those approaches into action, which involves the implementation of any controls and tools that can help, support and validate the work we’re doing in practice,” she says.
“Building that strategy around governance and ethics was the first piece of work I was involved in at the company.”
Cousineau says those foundations are now in place. As part of this effort, the company is using Snowflake technology to allow staff to find the insights they need and to create a cloud-based platform for long-term innovation. All enterprise information goes into the Snowflake Data Cloud and is stored in what Thomson Reuters calls its Data Platform.
As well as embracing cloud services, the company continues to refine its policies and standards to keep pace with rapid technological change, particularly in areas such as generative AI. With the building blocks for data management in place, Cousineau now ensures people across the business understand what good governance means on a day-to-day basis.
“That effort takes up a sizeable amount of my team’s time,” she says. “We’re working across each business function to ensure the right approach is in place. That’s all about driving cultural change and helping to influence people.”
Cousineau says her team has a strong awareness of all the various workflows of people across the organisation. They’ve used this knowledge to ensure that the data strategy they create is suitable for the tasks that people fulfil.
“My approach to governance and ethics was to avoid building frameworks and tools that couldn’t fit into people’s everyday workflows. These workflows differ greatly around the business. The way finance, for example, uses AI and machine learning models is very different from product or sales,” she says.
“We spent a lot of time understanding the workflows. The last thing I want to do is to make data scientists, model developers and product owners have another list of things to do. If you can make governance and ethics part of their workflows automatically, it becomes a lot easier – and we’ve done that.”
Preparing for the long-term impact of generative AI
Cousineau says most of her key priorities for the next 24 months are related to regulation and implementation. One of the big issues is laws that could be enacted to help organisations cope with the fast-moving world of generative AI.
“There are a lot more rules pending globally,” she says. “There is some ethical AI regulation already, but there’s more to come.”
Cousineau points in particular to the European Union’s (EU) General Data Protection Regulation (GDPR), as well as the EU AI Act and other legislation being enacted in Canada and individual US states.
“We took all those regulations and built our strategy so when more regulation comes around, we’re ready,” she says. “The main focus over the next 18 months will be ensuring we have the right checks and balances that allow us to foster innovation because we’re constantly building new models with new data.”
She gives an example of how those models and data are used by the firm’s clients: “One of our core products is Westlaw, which is a case law database, and it has built-in litigation tools and robust legal research tied to it. The legal professionals can build custom alerts to the information they need and that’s a capability of the tool.”
However these data models are used at Thomson Reuters, Cousineau and her team ensure responsible AI is practised across all use cases. She explains how this groundwork has proven crucial as new opportunities for GenAI have emerged. The expectation, she says, is that all use cases go through a data impact assessment.
As someone whose career has been based around the safe exploitation of information and insight, Cousineau is interested to see the next stage of developments around GenAI. She says the general tone should be one of cautious excitement – don’t rely too much on the technology, even if you think it has all the answers.
“I think there are certainly going to be ways that people can improve their ways of working, but everyone also needs to be careful in trusting information. A case in point is hallucinations: it’s a nicer way of saying the machine made an error,” she says.
“But if you’re using generative AI in an environment where the human-in-the-loop aspect is there, and you’re still reviewing content before it goes somewhere, there’s room to boost efficiency. Things that might have taken a few hours before could take a much shorter amount of time in the future.”