SPONSORED BY IBM
AI, particularly generative AI, is a fast-moving and exciting part of the tech world right now. But while some people are spending time with AI to generate poems about dogs and still images from movies that don’t exist, businesses are looking to integrate these technologies into their products efficiently and safely. We spoke with IBM’s Raj Datta and Savio Rodrigues to find out how they are building a better business-ready AI platform.
Ben Popper: Tell us a little bit about who you are and what it is you work on.
Raj Datta: My name is Raj Datta, and I’m responsible for IBM’s Software and Technology partnerships globally for our ecosystem. I’m focused on speeding the adoption of responsible AI by helping partners embed our AI and data platform, IBM watsonx, into their solutions and bring those solutions to market.
Savio Rodrigues: I’m Savio Rodrigues, and I lead developer advocacy and our engineering efforts with partners, helping them embed IBM technology into their offerings. We have a global team of engineers, data scientists, and developer advocates who help partners adopt IBM technology through hands-on engineering assistance and co-creation.
Ben: With so many players jumping into the LLM space, why build your own foundation models, some with new architectures?
Savio: We’re of the strong opinion that one foundation model or one large language model is not going to rule the world. You’ve heard from other vendors in the space that they’re trying to build the biggest model out there that can do everything. We don’t think this is the right approach for most enterprise businesses.
We believe that smaller models tuned for a specific business use case can be a more effective approach for developers and can help address issues around hallucinations, latency, and compute. So, you’ll see that across our partnerships, we are helping partners adopt specific models for industry and business needs. An example of that is our work with NASA to help widen access to NASA earth science data for geospatial intelligence and accelerate climate-related discoveries. Another example is our partnership with Hugging Face, where we are selecting models that are more in line with typical enterprise use cases. This approach can help developers adopt generative AI in a fashion that is faster, cheaper, and less risky.
Raj: We’ve seen an explosion of interest in AI from partners and clients across the market. One way we are helping accelerate adoption is through our portfolio of embeddable AI technology, a set of flexible and enterprise-grade AI products that partners can easily embed into their offerings. This way they don’t have to spend the majority of their time and resources on hiring data scientists and engineers to get their solutions up-and-running. For example, watsonx components can easily be embedded into a developer’s platform or solution, and then IBM will also help them take that solution to market.
In addition to addressing the cost and time it takes for partners to build AI-powered solutions, we are also helping our clients understand the provenance around data that’s used. This has been a major differentiator for us. IBM’s data collections were designed with enterprise uses in mind. However, if a partner chooses to use their own data, we stand behind our longstanding policy that such data belongs to the partner. That’s a very important distinction for software companies and why they value partnering with IBM.
Because IBM has taken such care in developing our models, we provide the same contractual intellectual property protections for IBM-developed AI models as we do for all of our products, helping to increase businesses’ trust in their AI journeys.
Ryan Donovan: You mentioned indemnification. I think it’s interesting that there are three top-level components to IBM watsonx. There’s watsonx.ai, watsonx.data, and then watsonx.governance. Why is the governance part so important?
Savio: When scaling AI, governance is important, and it comes down to the trust that we and businesses have in our technology. Trust and governance need to be top of mind for every business, software provider, and developer. Watsonx.governance helps businesses shine a light on AI models and eliminates the mystery around the data going in and the answers coming out.
Whatever a client’s governance approach is, watsonx.governance provides the tools for the client to govern the entire process, from the time a business leader decides they need a model all the way to the prompt they should use. Now let’s look at production: what if there’s an issue because the data coming in at inference time differs from what the model was trained on? Watsonx.governance provides a programmatic way of telling you that. It allows partners and clients to determine whether the right data was used, where it originated, how it has evolved, and identify any discrepancies in data flows while also helping to adhere to regulatory requirements.
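The production issue Savio describes, inference data diverging from training data, is commonly called data drift. As a rough illustration of the idea (this is a generic sketch, not how watsonx.governance is implemented), a monitoring job might flag a feature whose live values have shifted too far from the training distribution:

```python
import statistics

def drift_score(train_values, live_values):
    """Crude drift score: how far the live (inference-time) mean has
    shifted from the training mean, measured in training standard
    deviations. Real governance tooling uses richer statistics, but
    the principle is the same: compare live data against a baseline."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    return abs(statistics.mean(live_values) - mu) / sigma

def check_drift(train_values, live_values, threshold=2.0):
    """Flag drift when the live mean moves more than `threshold`
    training standard deviations away from the training mean."""
    return drift_score(train_values, live_values) > threshold

# A feature whose live values jump well outside the training range
# would be flagged, while values consistent with training would not.
print(check_drift([10, 11, 9, 10, 12], [20, 21, 19]))  # drifted
print(check_drift([10, 11, 9, 10, 12], [10, 11, 10]))  # stable
```

The threshold of two standard deviations is an arbitrary choice for illustration; in practice the acceptable shift depends on the feature and the regulatory context.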
Raj: Trust and our focus on responsible AI is what sets IBM apart. Governance is critical to driving enterprise AI adoption, and all companies using AI need to put guardrails in place to help govern their AI component correctly.
That’s one of the key discussions that Savio and I regularly have with our partners. It comes down to having responsible AI, and IBM has spent a lot of time, resources, and effort to be at that point for our partners. This approach empowers their developers to be more confident AI creators by providing protections when they use our models and delivering governance capabilities that help them manage AI and mitigate risks.
Ben: You mentioned that within the studio, you could do things like prompt engineering. To what degree is there an end-to-end solution for somebody who wants to train their own model? How would a customer go about that?
Savio: IBM supports traditional machine learning and foundation models, whether a partner is building an application with IBM technology or with technology from a third party. We find most developers are developing with notebooks and APIs versus in the studio itself. Both are supported and completely enabled through watsonx.ai. We encourage companies to try prompt engineering and prompt tuning first.
We have some internal research that shows a well-crafted prompt can offer similar results to fine-tuning. If a developer takes a prompt tuning approach and sees the same results, then they are not changing the base foundation model and it will not impact the context window of their queries. This approach offers the best of both worlds in terms of performance and cost.
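The prompt engineering approach Savio recommends trying first can be as simple as assembling a few-shot prompt: an instruction plus worked examples, sent to an unmodified base model. As a minimal sketch (generic Python, not the watsonx.ai SDK; the function name and format are illustrative assumptions):

```python
def build_prompt(instruction, examples, query):
    """Assemble a few-shot prompt from a task instruction, a list of
    (question, answer) examples, and the new query. The base model is
    untouched -- all task adaptation lives in the prompt text itself,
    which is what distinguishes this from fine-tuning."""
    lines = [instruction, ""]
    for question, answer in examples:
        lines.append(f"Q: {question}")
        lines.append(f"A: {answer}")
        lines.append("")
    lines.append(f"Q: {query}")
    lines.append("A:")
    return "\n".join(lines)

prompt = build_prompt(
    "Classify the sentiment of each review as positive or negative.",
    [("Great service and friendly staff!", "positive"),
     ("The order arrived late and cold.", "negative")],
    "Loved every minute of it.",
)
print(prompt)
```

Because the examples consume context-window tokens, the trade-off Savio mentions is real: a prompt that grows too large leaves less room for the query and the response, which is one reason prompt tuning (learned soft prompts) can be attractive at scale.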
Ben: I don’t know if you’ve been through another tech cycle where it felt like things were moving this fast or there was so much uncertainty around why things worked a certain way and how to benchmark. What are some of the most exciting applications that you’re seeing with partners and clients? And can you connect that back to open source?
Savio: The open source community’s impact is vital to the speed, rate, and pace of innovation. That is well aligned with how IBM operates. We’ve been actively developing with open-source communities for decades. When we thought about building “what’s next,” we didn’t start with a proprietary approach. Instead, we determined how to take the best of what the community has already developed and ensure we’re contributing back.
Whether it’s Ray, PyTorch, or CodeFlare for training and validating models, or Service Mesh and Hugging Face for tuning and serving models, we have developers contributing to those projects. So, what does this look like in the real world? I’ll share a few examples of how partners are benefiting from their collaboration with IBM.
Make Music Count is a partner in the edtech space using IBM watsonx Assistant to help students learn math and music through a conversational AI approach where the students can ask questions and get responses. This isn’t a simple bot; this is something that learns and understands what the student is asking, what their level of education is, and takes that into account with the response.
In the space industry, Ubotica Technologies is partnering with IBM to leverage IBM cloud infrastructure and watsonx.ai components, intending to simplify the process for a developer to get their application running onboard a satellite.
Raj: As Savio laid out, we’re seeing incredible interest in AI adoption from partners of all shapes and sizes across industries, from startups to enterprises. That is one of the reasons I love my job at IBM. We’re able to help companies of different sizes navigate and scale responsible AI to address their business needs.
Ryan: So, if people want to learn more about watsonx, where should they go?