Research & Development
We develop speech-to-text models optimized for enterprise environments, creating efficient and scalable solutions that bridge cutting-edge research with practical business applications.
In collaboration with the Intelligence Lab of ECE
What We Do
Our research focuses on developing and optimizing speech technology models specifically designed for enterprise use cases.
Speech-to-Text Models
We build speech-to-text models tailored for enterprise environments. Our models are designed to handle complex business terminology, multiple languages, and diverse audio conditions while maintaining high accuracy and low latency.
Post-Training & Customization
We specialize in post-training large language models to create custom pipelines for enterprise clients. Our approach enables companies to leverage state-of-the-art AI capabilities while maintaining control over their specific use cases and data.
Model Optimization
We make large language models more efficient and scalable for production deployments, developing techniques that reduce computational cost while preserving or improving model performance.
Enterprise Integration
We bridge the gap between research and production by creating models that integrate seamlessly with existing enterprise systems. Our solutions are designed with security, compliance, and scalability in mind.

Research Papers
Our research contributes to the advancement of large language models and speech technology through peer-reviewed publications and open-source contributions.
Geometric Model Merging for Efficient and Scalable Adaptation of Large Language Models
We propose Layer-Adaptive Spherical Linear Interpolation (Layer-Adaptive SLERP), a novel merging strategy that combines fine-tuned large language models efficiently without additional training. Our method significantly improves merging stability and task-specific performance across multiple architectures and parameter scales.
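The core operation can be illustrated with a short sketch. Classic SLERP interpolates two weight vectors along the great circle between them: slerp(a, b; t) = [sin((1−t)Ω)·a + sin(tΩ)·b] / sin(Ω), where Ω is the angle between the vectors. The layer-adaptive variant applies a separate interpolation factor per layer. The code below is a minimal illustration of this idea, not the paper's implementation; the function names, the per-layer factor dictionary, and the linear fallback for near-parallel weights are our assumptions.

```python
import numpy as np

def slerp(w_a: np.ndarray, w_b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors (illustrative)."""
    a = w_a.ravel().astype(np.float64)
    b = w_b.ravel().astype(np.float64)
    # Angle between the two weight vectors.
    cos_omega = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if omega < eps:
        # Nearly parallel weights: fall back to plain linear interpolation
        # (an assumption for numerical stability, not from the paper).
        merged = (1.0 - t) * a + t * b
    else:
        merged = (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
    return merged.reshape(w_a.shape)

def layer_adaptive_merge(model_a: dict, model_b: dict, t_per_layer: dict) -> dict:
    """Merge two state dicts, choosing a separate interpolation factor per layer."""
    return {name: slerp(model_a[name], model_b[name], t_per_layer.get(name, 0.5))
            for name in model_a}
```

In this sketch, `t_per_layer` maps each layer name to its own interpolation factor, which is what distinguishes the layer-adaptive scheme from applying a single global factor to the whole model.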
Impact
Our merged 7B variant attains competitive leaderboard performance and supports production-scale deployments, confirming the method's robustness and applicability to real-world adaptation tasks.
Why & How
Understanding our approach to speech technology and model development.
Why
Traditional fine-tuning approaches for large language models are resource-intensive and difficult to scale across multiple enterprise domains. We develop efficient methods that enable companies to leverage state-of-the-art speech technology without the prohibitive costs of training models from scratch.
Our research focuses on creating models that are not only accurate but also practical for real-world enterprise deployments, considering factors like computational efficiency, integration complexity, and domain-specific requirements.
How
We use techniques such as geometric model merging, post-training optimization, and layer-adaptive spherical interpolation to create efficient, scalable solutions. Our approach combines multiple fine-tuned models into a single performant model without requiring additional training.
We work closely with enterprise clients to understand their specific needs, then develop custom pipelines that integrate seamlessly with their existing systems while maintaining high performance and reliability.