AI has completely redefined the way we work, with teams increasingly implementing the technology to make workflows more efficient and easier to manage.
Over the last 12 months, European companies building AI agents were among the most active fundraisers, with 457 deals closed, according to Sifted data. Startups in the sector are consistently working on ways to automate processes across departments such as sales and marketing, customer service and financial operations.
But as AI transforms ways of working, the future of work will depend on teams’ ability to combine the technology with human judgement and social responsibility, according to David López, associate dean of Esade Business School’s Full Time MBA.
“Leaders now need to decide how AI is used, when it should be trusted and where human judgment must intervene,” he says.
“At Esade, AI is never taught in isolation from its consequences. Every discussion about technology is paired with questions about stakeholders, incentives and long-term effects on society and the environment,” he adds. “Our goal is to shape future leaders and entrepreneurs who internalise responsibility.”
Esade Business & Law School raises awareness of how technology and sustainability can work hand in hand through its undergraduate and tailored master's programmes, as well as through panels hosted at events such as 4YFN, a startup forum within the Mobile World Congress (MWC).
At this year’s 4YFN, Esade, an academic partner of the event, has centred its debates and sessions on ‘AI for meaningful impact’, with the aim of placing artificial intelligence at the service of society and the common good.
How can AI be used meaningfully?
“AI should be a tool to improve working conditions, not a mechanism to maximise output at the expense of people,” says Irene Unceta, professor of data, analytics, technology and AI at Esade and host of a panel at 4YFN. “We need critical oversight and a healthy dose of technical scepticism. If AI reinforces the need for a human behind the decision, we need people who understand what AI can do and how to leverage its full potential, but who are also acutely aware of its limitations.
“This means knowing when to question a recommendation, when to override it and when not to deploy AI at all.”
Many organisations integrate AI into their workflows without fully understanding bias, transparency or who is responsible when something goes wrong, adds López.
“There is a growing gap between technical capability and managerial understanding. Leaders may rely on AI outputs without fully grasping their limitations, underlying assumptions or potential societal impact,” he says.
Human judgement becomes all the more critical when implementing these systems, he adds, with leaders ensuring they frame questions correctly, interpret outputs in context and embed human values, ethics and accountability into final decisions.
Technology is shaping not only our relationships with others, but also our relationships with ourselves and the world around us, adds Alison Lee, chief R&D officer at The Rithm Project, a non-profit organisation working to foster connection and community in the age of AI.
“We’ve seen data on how social relationships, loneliness and belonging impact wellbeing. If your technology system shapes what people see, feel, and believe, that inherently shapes a social environment,” she says. “Founders and technologists have responsibilities to prevent foreseeable harm, especially for minors and vulnerable users and to be honest about what the system is and should be used for.”
Companies ought to audit the impacts of AI systems across different communities, not just on the median user, Lee says. Founders and technologists should also commit to a North Star for the kind of impact they want to see with AI, such as leaving users more connected, informed and capable than when they first arrived.

Nurturing responsible future leaders
Fostering future entrepreneurs who understand how to combine technology with sustainable and meaningful impact is a key part of Esade’s mission. This humanistic approach to technological innovation is embedded across its academic portfolio, where AI is framed not as a purely technical discipline but as a strategic leadership capability.
Business programmes at Esade integrate AI into broader discussions around governance, decision-making and long-term value creation. In 2021, Esade also launched a double degree in Business Administration & Business and Artificial Intelligence, designed to prepare future leaders to integrate AI into strategic decisions while also maintaining critical judgment and a clear sense of societal impact.
Esade supports students in understanding how AI can be used strategically, but also how its impact is determined by who designs it and who deploys it, says Unceta.
“We encourage students to assess not just whether AI can solve a specific problem, but whether it should. Our goal is that students learn to distinguish between innovation that genuinely improves working and living conditions, and innovation that simply extracts more value from people,” she says. “Businesses don’t exist in a vacuum; they operate in societies and have responsibilities that extend beyond the balance sheet.”
Students at Esade are also encouraged to engage with real-world cases and make strategic decisions in scenarios where efficiency, inclusion, sustainability and trust are in tension, adds López. The school is strongly committed to supporting future entrepreneurs in understanding the responsibility they have when using and deploying AI.
“Our approach integrates AI, ethics and sustainability through four pillars,” he says. “First, algorithmic literacy. Leaders do not need to code, but they must understand how algorithms work, where biases can emerge and what their limitations are. Second, strategic application across business areas. AI only creates value when it is applied strategically.”
"The decisions we make now will shape not only business models, but also social and environmental outcomes for years to come."
“Third, ethical and responsible governance. We place strong emphasis on accountability, transparency, and societal impact. Leaders must understand not only what AI can do, but what it should do. Fourth, adoption, corporate culture, people and strategy. The success of AI depends less on technology than on organisations.”
Lee, who has previously been a panellist at 4YFN, is working to prioritise human connection and belonging even with the rise of AI. “We do mixed-methods research to understand how AI and algorithmic systems are shaping relationships, identity, belonging and mental health,” she says.
“Then we translate those insights into practical tools: things educators, parents, youth, tech builders and policymakers can actually use. Technology can foster belonging when it helps people make intentional choices, increases understanding across differences and nudges people towards offline relationships and community participation.”
López hopes that business and entrepreneurship courses at Esade Business School, along with events such as 4YFN, will continue to ensure that technological progress supports both social cohesion and environmental responsibility.
“This conversation is critical because we are at a decisive moment,” he says. “Technologies such as AI are scaling faster than our organisational, regulatory and leadership capabilities. The decisions we make now will shape not only business models, but also social and environmental outcomes for years to come.”