To go open or closed? It’s one of the key questions businesses face when building new LLM-based applications. Whether enterprise developers build around open models or plug into proprietary APIs involves trade-offs in customization, cost, security, and performance.

At New York Tech Week last week, a panel of experts at IBM’s Manhattan offices discussed the considerations for businesses building on open models like Meta’s Llama family and Google’s Gemma. The AI Alliance, an IBM- and Meta-backed group that promotes open-source AI, hosted the event.

The panel began with an exchange of definitions, which matters because there is no fully agreed-upon standard for what is or isn’t an open-source model, though the Open Source Initiative has proposed one. The answer depends on which of three broad components (model weights, source code, or training datasets) developers make available.

When the audience was asked what type of systems they worked with, a show of hands produced no clear front-runner: sizable portions indicated they used open, closed, and hybrid models.

Keep reading here.—PK