‘We’ve come very far, very fast’
James Poulter is head of AI & Innovation at House 337, a London-based creative agency. He helps organisations navigate and adapt to AI disruption. Here are a few excerpts from his keynote talk at the Missional AI Summit on 8 April 2025. You can watch his entire talk here (English or auto-translate).
Where an unfamiliar term is mentioned, we link to an explanation of that term.

James Poulter (left) and Missional AI emcee Mark Matlock during a panel discussion.
A new revolution
We used to do simple things. The blacksmith, the shoemaker, the cobbler. And then the knowledge economy came and we had lawyers and we had doctors who gave advice and we had HR professionals and people managers and then assistant people managers and then the regional people manager and so on and so forth. And we did it because technology made things more complicated. It gave us opportunities to specialise and niche down. We were doing it because we were trying to pursue efficiency and we had to get deep knowledge into these sectors, because basically we had just created too many things for people to do.
And as we went further and further, we also had challenges scaling this stuff. Organisations went from being down your street in your local town and in the village to country level, multi-country level, regional, now global organisations. And so the complexity grew and grew and we added layers and spans. …
A transformed job market
And then, enter the 2020s. We’re all talking about jobs again, but not about there being more of them. Potentially about there being fewer. And I think it’s breeding a kind of strange change … in the way in which we feel about what it means to do work, any kind of work. Missional work, work out in the field, in the office, in business and commerce, in politics. There are a lot of questions going on. And I think this characterises where we’re at right now.
I feel like 2024 was the year of FOMO (fear of missing out) when it came to artificial intelligence. Jump on the bandwagon, get going, learn the tools, do more. Right? And then I think as we tip into 2025 and the start of this year, I think we’re now into what I would call the fear of AI obsolescence. … Now, this is not because all of us are about to have our jobs taken away tomorrow. But I do think we are reaching a tipping point. That idea that Malcolm Gladwell popularised some years ago, where there are just a couple of things that are going to nudge us past a point of no return into where we might head into a completely new way of doing things. …
What does it look like when we wrestle with getting all of the juice out of this technology, all of the efficiency, all of the gains, the productivity, and the wonderful things that it can do without giving up on the thing that we hold sacred, which is the value of what it means as humans to be on a mission in the world? …
Extreme acceleration
I think we have to understand what work is going to look like in the years to come—or maybe in the year to come. How did we get here? Well, 2022 we know was the AI explosion. ChatGPT bursts onto the scene and reaches 100 million users in less than two months. ... We are now reaching around 400 million users a month of just that tool on its own. Crazy numbers in two years. And I think as a result, we are all feeling a little bit of AI anxiety, because this stuff has entered into our cultural experience in a way that it was just impossible to imagine would happen that fast. …
If you listen to what (OpenAI CEO) Sam Altman says, he thinks that by 2030, humans will be capable of doing in an hour what used to take them a month. 2030. That’s five years from now. ... It’s not long.
How are we going to get there? Well, OpenAI and others think that the future is the agentic future. …
We can see that this future is emerging very, very quickly. It feels real. Even at this conference last year, we weren’t really talking about AI agents. Twelve months on, that’s the thing we’re talking about. Twelve months from now, I think we will be talking about what OpenAI calls the next levels on its route to AGI (artificial general intelligence): innovators and organisations, entire systems and organisations created by AI and operating on their own.
A stunning forecast
I’ve been really struck this past week by a new paper from a number of research scientists and former AI-lab team members … and this comes from the AI 2027 forecast. … This is where they think AI takeoff is heading: that somewhere around mid-2027 we reach some level of superhuman AI researcher, and then superintelligent AI researcher.
Why is this significant? Because AIs are going to get better at training AIs. And when they get better at that, they get better at doing everything else afterwards. This is a future that feels not five years away, but two, and I think it has a really significant impact on what the future of work looks like. Thirty-one percent fewer specialised roles are needed for the same output, MIT Technology Review found last year. This means that the jobs we currently have are beginning to collapse in on each other. And I think this is probably the trajectory we are headed on.
AI plus humans equals … what?
Another paper, popularised just a couple of weeks ago by Ethan Mollick, who is a fantastic communicator on this topic, found that in a number of different settings, individuals who had access to AI performed as well as an entire team without AI, and that teams which added AI on top were three times more likely to produce exceptional solutions. … And so we begin to see this amazing shift: what does it mean to add AI to a human, and to add AI to a team of humans? We suddenly get incredible levels of output. And we’ve come very far, very fast.
How many of you are design folks? Photoshop users? … Creative disciplines like photo editing have gone to … not just editing the photos, but creating them entirely. … So we’re creating and editing images, and we’re destroying what it means to be a graphic designer perhaps. And it’s coming … for things like web development as well. …
I hadn’t written a line of code since those early days of GeoCities HTML, two decades ago. And yet I have found myself, since the start of this year, building entire applications, front end and back end, reprogramming Supabase databases, creating MCP servers, doing everything all the way through this experience.
(He then demonstrated building a website for a megachurch in minutes)
I don’t know for the developers in the room how long this would take you usually, because I’m not a developer, but six months ago, my guess is this was a couple of days’ worth of work. … And it wasn’t just two to three days’ worth of work. It was two to three days’ worth of work as the product of two to three decades of learning. (Now) it’s two to three days’ worth of work done by someone that has spent no time doing that learning.
Collapsing sectors
And this is where we are headed. Now, I don’t know if the quality of this is going to be good. It’s certainly not going to be finished. But the limitation on translating your imagination into something you can show somebody else has just evaporated. No more going and saying, “I had an idea. I thought it could kind of look like this.” Just, “Here it is. Go.”
This is powerful, powerful stuff. And we’ve seen this collapse happen in writing. We’ve seen it now collapse in image creation. We’re seeing it collapse in code. And we’re going to see it collapse in things like video and many other disciplines as well. And the power of these agentic features is that when we begin to add all of these things together, maybe we’ll not just have to do these individual tasks, but the agents will do all these tasks together.
Whether you’re on the operational side, the creative side, the development side or the technical advisory side, whether you’re a consultant, it doesn’t really matter. These models from these different companies, these tools that are now embedding these models, are going to fundamentally change what work looks like. And I don’t think it’s going to take that long. And so, we’ve got to get ready for it.
Leaders needed
So I want to propose to you that we need to start thinking not just about having full-stack developers; we need to have full-stack workers of every kind, of every discipline, because the job fragmentation we’ve seen over the past two decades, I think, is going to begin to reverse itself. I think we’re going to begin to see the verticalisation back into disciplines, in a way that we had in the early trade craft of the Industrial Revolution and into the knowledge economy of the early ’60s. We’re going to see a professionalisation of these different classes come together, and that means that we humans have to be really great at what I’ll technically call the squishy stuff. The stuff that lives and breathes in this part of the world, not in the server.
Full-stack professionals … (are) going to lead people through immense transition and change in the next decade. They have to have deep empathy. We have to cultivate the skills that we are good at, which is the body language, which is the understanding, which is the cultural nuance, which is the interplay between different people in our organisations, teams and in the communities that we serve. We have to be able to think strategically outside of the context of the individual model and understand the world around us as things begin to change and move. We have to have fluency to be able to cross cultures and understand the nuances of what it means for different groups of people to come together in community, in business, in finance, in politics or wherever else they might be trying to operate.
And then the crucial skill for most of us will be that we have to learn how to orchestrate these AIs to do the technical stuff that previously was the job that you might have done.

James Poulter's AI creation of the full-stack professional action figure.
Opportunities to reinvent our work
We’ve heard this phrase said for a little while, and I may have been guilty of saying it myself: “AI is not coming to take your job. Just someone that knows how to use AI is.” And I think that’s wrong now. I think AI is coming to take a lot of these jobs, but we have a huge opportunity to recreate work around us and do different things. In the same way the Industrial Revolution did, the AI revolution will give us a new opportunity not to have to spend as much time farming in a field or creating horseshoes or whatever else we might have done. New opportunities are going to emerge. …
We have to bridge a talent gap. If you have not already got an AI policy in your organisation, it needs to be written today. If you have not got an AI training plan, not just for the technical members of your team, but every member of your team, it needs to start tomorrow. And if you have not begun to look at your talent pipeline for how you’re going to hire and who you’re going to hire in the next decade, then that needs to happen before the end of this quarter as well. …
New grads alone won’t solve the challenges
We have people currently going through universities and schools where this stuff is not yet in the curriculum and it’s not going to catch up fast enough. And so those of us that run organisations, run churches, run businesses, run political and community institutions, we’re going to have to pick up that lift because we’re not going to get there quick enough through the academic routes.
The academic routes will improve, and if you work with academia, I really hope that you help them, but we also need to do our part in this. ... Those people about to enter their first five years of work have a real challenge on their hands, because I don’t know about you, but my first five years of work were mostly spent doing all of the administration stuff … that my boss did not want to do. How many of you filed papers and edited documents and took meeting notes and organised data and went and did the coffee run? The coffee run seems to be the only thing that may still persist. But that was how we learned.
And why were we given those things? Well, because we didn’t have tools to do them. So, we had to have people do them. But what were you also doing … in those first five years? You were watching the people around you. You were watching your peers. You were learning to follow and model after. You were being discipled into some form of work, into some form of job. And even though those tasks were repetitive and they weren’t particularly complex, the thing that you were getting was exposure to what did it mean to be a person operating in a community of others doing this type of job?
And so in the AI space, where those tasks become economically ludicrous, really, to give to a young person entering the first few years of a career, we have to think about what we are going to give them to do. Because we can’t not give them anything to do. I don’t think that’s the answer. Some will say universal basic income. Some will say that we just start crediting them crypto and hopefully they’ll be fine. But I don’t think those are good enough answers at the scale that we’re going to need them. …
Particularly for the knowledge workers amongst us, for those of us that do the types of jobs that many of you do in this room, we need to think about what it looks like to apprentice people into these jobs, to invest in them in a way that we’ve not had to do before. Not because they can provide some widget utility or functionality, but because if we do not invest in them in the next five years of their careers, there won’t be a job for them to do at all. You don’t get a good midweight lawyer, HR professional, IT person or any other discipline if they don’t get a chance to be a good junior one in the first place. They can’t just insert themselves in the midstack. We need them to be learning and observing.
And so I think we need AI-enabled observer roles, where young people learn to watch AIs do really complex tasks and work alongside humans, developing the better critical thinking they will need to be good at those higher-level tasks as their careers develop. And we need them to build that AI competency set. We need them to be experts at it. And many of them will be, because they’re already open-minded and doing it. They haven’t got the embedded and entrenched ways of doing things that many of us have accrued over a couple of decades. …
Qualification by personality type
And I think if we do this, then we might begin to see a more diversified workforce emerge, aligned more to the types of personality people have than to the skills they’ve been trained in.
For those of you that have taken, and probably criticised, things like the Myers-Briggs test or the DISC profile or the colour wheel or the Enneagram (pick your slightly pseudoscientific personality typing of choice): they all have their merits and all have their fallibilities, of course. But I do think they reveal a certain thing which we’ve seen, and which I think we believe is also biblical: that there are certain types of things that God has created and given to us to do. To love, to care, to teach, to disciple, to evangelise, to administer. These are the things that we’ve known for all of human time that we are good at. And then some of us are better at some parts of it than others. And so we need to cultivate people in the skills that they have and help them mature into careers that will hopefully allow them to go better and go further.
Humans? No humans?
So, we’ve got people working with the AIs, and I think … the most crucial question we can be asking ourselves is: what does it look like for the human to be in the loop? In the loop, on the loop or out of the loop? Will you allow the AI to do the job on its own? Will you have the human observing, helping somewhere along the way? Or are there just some things that we don’t want to give up? These are crucial questions, and there are too many variables to answer what it looks like for you in your job. But I think this is maybe one of the most crucial questions we ask ourselves over the next 18 to 24 months, as we head toward that superintelligent AI researcher becoming a reality, if that timeline is even nearly correct.
Digital equity and access to intelligence
Many of you work for people groups around the world in different places where the access to AI technology and digital inclusion has been a problem for the past couple of decades as we’ve seen the internet deployed, mobile phones and the emergence of social media. We need to ensure that that does not get worse in the AI space, because we may only further entrench the inequality that we already see around the world. We have, I think, a responsibility—particularly for those in the West and I would say particularly for those Americans in the room where many of these technologies are being developed—to influence the way in which this stuff becomes available around the world. I think you hold a particular burden which is to ensure that we have global and equitable access to this AI and to the intelligence that we are aggregating. Not just so that they have access to Western intelligence but so that we have access to global … intelligence. I don’t have all the answers. Neither do you. But if we’re going to aggregate all of them somewhere, then we should be aggregating them from everywhere.
I could be way off. I could be 10 years off. I could be 20 years off. But does that really change anything? I got my first smartphone when, I think, the iPhone 3s came out, or something like that. That was 18 years ago. 18 years. Does that feel recent to you? So, whether it’s two or five or 10 or 50, to be honest, your children and mine, our grandchildren, are going to be living with whatever this looks like. And so I think it’s up to us to do something about it.
I don’t say any of this stuff to scare you. That’s really not the intention. I’m also not down on this. We’re here to elevate what this means, right? To seek for higher opportunities than what might just happen without our intervention. And I think particularly for us as the church, as God’s people, we have one huge responsibility. Let us not be late to this party. Let us be the ones who push this forward, but push it in the right way, the way that we want to see it happen.
•••
Two global forecasts
James Poulter mentioned two published visions of what the AI revolution could mean to the world in the next five years:
In April 2025, four noted AI researchers and forecasters published this detailed, chilling scenario for the next five years as AI accelerates. They describe an escalating AI arms race between the U.S. and China, with disastrous consequences. By late 2027, they believe, an actor with control over artificial superintelligence (ASI) could gain control over humanity’s future – a future which, in their scenario, lasts only three more years.
Don’t read this one before bedtime.
James Poulter fed the AI 2027 paper to the AI tool Claude, asking it to produce a response scenario about what the world will look like for the global church.
•••
Story: Jim Killam, Wycliffe Global Alliance
Translated with DeepL. How was the translation accuracy? Let us know at info@wycliffe.net
Alliance organisations are welcome to download and use images from this series.
05/2025 Global

Tech pioneer: Christians ‘have to show up’ for AI
Silicon Valley pioneer Pat Gelsinger was CEO of Intel Corporation until December 2024. Quickly realising his career in technology was not finished, he joined the faith/tech platform Gloo in early 2025 as executive chair and head of technology. He is also a general partner at the venture capital firm Playground Global. Gelsinger was instrumental in the development of cloud computing, Wi-Fi, USB and many other everyday technologies. He estimates his work has touched 60 to 70 percent of humanity. Here are highlights of his keynote talk at the 2025 Missional AI Summit. You can watch his entire talk here.

Pat Gelsinger (left) is interviewed onstage by Steele Billings. Both are with Gloo. Watch the full interview here.

Is technology good or bad?
Technology is neither good nor bad. It’s neutral. It can be used for good. It can be used for bad. … If you think back to the Roman roads, why did Christ come when he came? I’ll argue the Pax Romana and the Roman roads. … The greatest technology of the day was the Roman road system. It was used so the Word could go out.

Historical example
I will argue Martin Luther was the most significant figure of the last thousand years. And what did he do? He used the greatest piece of technology available in his day, the Gutenberg printing press. He created Bibles. … He broke, essentially, the monopoly on Bible translations…. He ushered in education. He created the systems that led to the Renaissance. That’s a little punk monk who only wanted to get an audience with the pope because he thought he had a few theological errors. I’ll argue (Luther was) the most significant figure of the last thousand years, using technology to improve the lives of every human that he touched at the time.

How today compares to the dawn of the internet
AI is more important. AI will be more significant. AI will be more dramatic. … This is now incredibly useful, and we’re going to see AI become just like the internet, where every single interaction will be infused with AI capabilities. In the 75-year-or-so history of computing, we humans have been adapting to the computer. … With AI, computers adapt to us. We talk to them. They hear us. They see us for the first time. And now they are becoming a user interface that fits with humanity. And because every technology builds on the prior technology, AI will unquestionably be the biggest of these waves, more impactful even than the internet was.

On the need for AI development to be open-source
It is so critical because we’re embedding knowledge, embedding values, embedding understanding into those underlying models, large language models, and every aspect that happens. It must be open, and this is part of what I think is critical about us being together here today. We need to be creating trusted, open, useful AI that we can build humanity on.

On the need for Christians to help build AI systems
We have to show up as the faith community to be influencing those outcomes, because remember what happened with social media. We didn’t show up, and look at what we got. So are we going to miss this opportunity with AI, for something that’s far more important than social networking? Where it truly embeds every aspect of human history and values into the models? We have to show up, team. What we do with large language models is far more important, because truly we are choosing how we embody knowledge of all time into those underlying models. They need to be open. They need to be trusted.

What Christians must bring to the process
If we’re going to show up to influence AI broadly, we have to show up with good engineering, good data, good understanding, good frameworks. How do you measure things like ‘Is that leading to better character? Is that leading to better relationships? Is that creating better vocational outcomes? Is that a valid view of a spiritual perspective?’ We need good underlying data associated with each one of these. And for that we’re actively involved. We’re driving to create that underlying data set. Because we need to show up with good data if we’re going to influence how AI is created.

How should this work?
For the AI systems, we need to create good benchmarks. If I ask about God, does it give me a good answer or not? If I ask about relationships with my children, does it give me good answers? We need to create the corpus of data to give good answers to those questions. And, armed with that good data, we need to show up to influence the total landscape of AI. We want to benchmark OpenAI. We’re going to benchmark Gemini. We’re going to benchmark Claude. We’re going to benchmark Copilot. This is what we’re going to do at Gloo, but we want to be part of a broader community in that discussion so that we’re influential in creating flourishing AI. Technology is a force for good: AI that truly embeds the values that we care about, that we want to honour, that we want to be representing into the future, benchmarked across all of them.

On his role with Gloo
We are going to change the landscape of the faith community and its role in shaping this most critical technology, AI, for faith and flourishing. That’s what we’re going to do at Gloo, and we need all of your help and partnership to do so, because if we don’t hang together, we’re not going to influence the outcome, right?

‘Here am I, Lord’
I don’t think I’m done. … You and I both need to come to the same position like Isaiah did. Here am I, Lord. Send me. Send me. Send us. That we can be shaping technology as a force for good. That we could grab this moment in time. This is the greatest time to live in human history. We’re going to solve diseases. We’re going to improve lives. We’re going to educate every person in poverty. We are going to solve climate issues. We are going to be using these technologies to improve the lives of every human on the planet. We are going to shape technology as a force for good. Here am I, Lord. Send me.

•••

Story: Jim Killam, Wycliffe Global Alliance
Translated with ChatGPT. How was the translation accuracy? Let us know at info@wycliffe.net.
Alliance organisations are welcome to download and use images from this series.
AI opens a new world for sign language translation
Caio Cascaes of DOOR International

During his sign-language presentation at the 2025 Missional AI Summit, Caio Cascaes of DOOR International showed three examples of how AI is expected to accelerate Bible translation for hundreds of sign languages.

Chameleon
Chameleon is already available and being deployed. Previous sign-language videos required the human signer to wear marking sensors for the camera to track motion. Chameleon can capture human signing with multiple cameras and without the need for those markers. Then, it can create avatars – AI-generated figures that can vary by physical appearance, age, gender, background and even signing styles. Along with producing avatars that match the audience, this also protects the human signer’s identity if security is an issue. The initiative represents a balance, Cascaes signed, ‘still keeping to the truth and integrity of God’s Word but doing it in a way that’s most relevant for the current demographics and the people we’re serving.’

A sign language translator works on the Old Testament book of Ruth for Chameleon.

Lava
Lava is under development by SIL. It’s a video and optimisation tool that is language agnostic – meaning its underlying code can be used to translate any sign language. In the example video Cascaes showed, a .png photo of an East Asian man was uploaded to Lava, which animated the man’s likeness into a sign-language video. Then, for another community, a different .png photo could be used to animate another video using the same sign motions. ‘It has been such an incredible benefit’, Cascaes signed. ‘This technology can be leveraged in countries that are incredibly sensitive, where the signing talent, if their identities were known, would face a life-threatening situation.

‘And so in an effort to preserve them, of course, and to preserve the integrity of God’s Word, we now have the ability, leveraging this technology, to make it so that their identities can be totally obfuscated and they and their families can be kept safe.’

Avodah Connect
Finally, Avodah Connect is developing technology that can search and replace specific signs used by an avatar in a video. A translation team may want to use, or not use, a specific sign not present in the original video. This technology will keep them from having to re-record entire videos. ‘This sign language searchability is going to be able to save so much time, so much energy, and accelerate the process of being able to swap out certain signs and make it more applicable and match the type of work you’re trying to do for the audience you’re trying to serve’, Cascaes signed.

•••

Watch the entire presentation here. Separate session video: Avodah Connect

Story: Jim Killam, Wycliffe Global Alliance
Story translated with ChatGPT. How was the translation accuracy? Let us know at info@wycliffe.net
Alliance organisations are welcome to download and use images from this series.