

"It's just heartache." A call for better regulation "At the end of the day, no one wins when a builder goes broke - builders, the Tax Office, subcontractors, and, obviously, home owners," Michael says. They pay people to manage the jobs - they have an accounts team they have a marketing team," which amounts to "a lot of fixed costs", he explains. "A lot of builders who are going broke … are office-based. Recent price rises are a problem for large construction firms operating at low margins of 7 to 8 per cent, Michael says. These figures mean there's no buffer when costs increase. These days, it's a situation he's seeing more often. They're struggling financially, too," Michael says. " got two young children they're living with their parents. The completed work was substandard and underinsured, creating problems for the owners, including a $400,000 budget blowout. (Source: Phil Dwyer, builder and president of the Builders Collective of Australia)

How Aviary works to ease open source LLM deployments

The Aviary project builds on top of the open-source Ray project with a set of optimizations and configurations to ease LLM deployment of open-source models. Ray is already widely used by large organizations for model training and is the technology that OpenAI uses for its models, including GPT-3 and GPT-4. The goal with Aviary is to automatically enable users of open-source LLMs to deploy quickly with the right optimizations in place.
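To make the connection to Ray concrete, here is a minimal sketch of the kind of Ray Serve deployment pattern that Aviary automates: a Hugging Face text-generation model wrapped in a scalable serving class. It uses standard Ray Serve and Transformers APIs, but the model identifier, resource settings and request format are illustrative assumptions, not Aviary's actual configuration.

```python
# Minimal sketch of serving an open-source LLM with Ray Serve.
# Model choice, GPU settings and request schema are illustrative assumptions.
from ray import serve
from transformers import pipeline


@serve.deployment(num_replicas=1, ray_actor_options={"num_gpus": 1})
class LLMDeployment:
    def __init__(self, model_id: str):
        # Load a Hugging Face text-generation pipeline; device=0 places it on the first GPU.
        self.generator = pipeline("text-generation", model=model_id, device=0)

    async def __call__(self, request) -> dict:
        # Ray Serve passes in the incoming HTTP request; read the prompt from its JSON body.
        payload = await request.json()
        output = self.generator(payload["prompt"], max_new_tokens=128)
        return {"generated_text": output[0]["generated_text"]}


# Bind the deployment to an example model and start serving over HTTP.
app = LLMDeployment.bind("amazon/LightGPT")
serve.run(app)
```

A client could then POST a JSON body such as {"prompt": "Hello"} to the Serve HTTP endpoint (http://127.0.0.1:8000/ by default) to get a completion back. The point of Aviary, per the article, is that settings like these are pre-configured per model rather than worked out by hand.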

Nishihara explained that there are many different things that need to be configured on the infrastructure side, including model-parallel inference across multiple GPUs, sharding and performance optimizations. The goal with Aviary is to have pre-configured defaults for essentially any open-source LLM on Hugging Face. Users don't have to go through a time-consuming process of figuring out infrastructure configuration on their own; Aviary handles all that for them.
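To illustrate, here is a hypothetical sketch of the kinds of per-model defaults such pre-configuration might capture. The field names and values are assumptions made for the example, not Aviary's actual schema.

```python
# Hypothetical per-model defaults of the kind Aviary pre-configures.
# Field names and values are illustrative assumptions, not Aviary's real schema.
lightgpt_defaults = {
    "model_id": "amazon/LightGPT",     # Hugging Face model to serve
    "num_workers": 2,                  # replicas used for model-parallel inference
    "num_gpus_per_worker": 1,          # GPUs assigned to each replica
    "sharding": "tensor_parallel",     # how weights are split across GPUs
    "max_batch_size": 8,               # request batching for throughput
    "dtype": "float16",                # reduced precision to cut memory use and latency
}
```

Capturing choices like these once per model is what removes the infrastructure-configuration burden the article describes.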

Aviary also aims to help solve the challenge of model selection. With the growing number of models, it's not easy for anyone to know the best model for a specific use case. Nishihara said that by making it easier to deploy open-source LLMs, Aviary is also making it easier for organizations to compare different LLMs. The comparisons enabled via Aviary include accuracy, latency and cost.

Aviary has been in private development at Anyscale for the last three months. Initially it took a bit of time to get the right configuration for any one open-source LLM, but what has become clear is that there are common patterns across all LLMs for deployment. As new LLMs emerge, Aviary will enable them quickly. Nishihara said that when LightGPT became available, Aviary was able to add support for it in less than five minutes.
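To show what an accuracy, latency and cost comparison like the one described above might look like in practice, here is an illustrative sketch that sends the same prompt to two already-deployed model endpoints, times the responses, and prorates a GPU price into a per-request cost. The endpoint URLs and hourly prices are made-up assumptions, and this is not Aviary's own comparison tooling.

```python
# Illustrative latency/cost comparison across candidate LLM endpoints.
# URLs and GPU prices are made-up assumptions for the sketch.
import time
import requests

CANDIDATES = {
    "model-a": {"url": "http://localhost:8000/model-a", "gpu_cost_per_hour": 1.20},
    "model-b": {"url": "http://localhost:8000/model-b", "gpu_cost_per_hour": 2.50},
}

PROMPT = "Summarize the benefits of open-source LLMs in one sentence."

for name, cfg in CANDIDATES.items():
    start = time.perf_counter()
    resp = requests.post(cfg["url"], json={"prompt": PROMPT}, timeout=60)
    latency_s = time.perf_counter() - start
    # Prorate the hourly GPU price into a rough per-request cost.
    cost = cfg["gpu_cost_per_hour"] * latency_s / 3600
    print(f"{name}: {latency_s:.2f}s latency, ~${cost:.5f} per request")
    print(f"  sample output: {resp.json().get('generated_text', '')[:80]}")
```

Accuracy would be judged separately, for example by scoring the returned completions against a task-specific test set.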
