The USAA Compute Hub, hosted by the Center for Data Science, is a shared research infrastructure that provides programmatic access to a wide range of open-source large language models (LLMs), including both text and vision-language models. Its primary focus is to support undergraduate student learning opportunities within the College of AI, Cyber and Computing by giving students hands-on access to modern AI tools and infrastructure. Through standardized APIs, students and researchers can experiment with, develop, and evaluate AI systems for tasks such as natural language processing and multimodal reasoning. The platform emphasizes openness and extensibility, allowing new open-source models to be integrated as the ecosystem evolves and ensuring that students engage with current, real-world AI technologies.
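As a sketch of what "programmatic access through standardized APIs" could look like, the snippet below builds an OpenAI-style chat-completions request using only the Python standard library. The endpoint URL, model name, and API shape here are illustrative assumptions, not the hub's confirmed configuration; confirm the actual values with the administrators once access is granted.

```python
import json
import urllib.request

# Hypothetical endpoint and model name -- placeholders only.
# Replace with the values provided by the hub administrators.
API_BASE = "http://compute-hub.example.edu/v1"
MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # example open-source model

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for the hub."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Summarize transformer attention in one sentence.")
# urllib.request.urlopen(req) would send the request once the real
# endpoint and any required credentials are configured.
```

Because the request format follows the widely used OpenAI chat-completions convention, the same pattern works with common client libraries and with servers such as vLLM that expose compatible endpoints.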
For more details or inquiries on how to gain access, interested users should contact anthony.rios@utsa.edu or james.benson@utsa.edu. In your request, please include your department and a brief description of how you plan to use the resource. Access is granted on a rolling basis, with priority given to educational activities that directly support undergraduate instruction, coursework, and student-led projects within the College of AI, Cyber and Computing.
The USAA Compute Hub is powered by a high-performance, GPU-accelerated cluster designed to support large-scale AI workloads and hands-on undergraduate learning. The system includes multiple Dell PowerEdge compute servers connected through high-speed networking to enable efficient distributed training and inference.
The cluster features three Dell PowerEdge R660 nodes equipped with dual Intel Xeon Gold processors, 64GB of RAM per node, and a mix of SSD and BOSS (Boot Optimized Storage Solution) storage for fast local data access. These nodes provide general compute capacity for experimentation, lightweight model hosting, and coursework.
For GPU-accelerated workloads, the system includes two Dell PowerEdge R760XA servers configured with Intel Xeon Gold processors, 1TB of RAM per node, and high-speed NVMe storage. Each R760XA node is equipped with four NVIDIA H100 NVL GPUs, enabling large-scale training and inference for modern language and vision-language models. High-bandwidth 100GbE networking connects all nodes through redundant enterprise switches, ensuring low-latency communication and reliable performance.
Together, this infrastructure provides a scalable environment for running open-source LLMs and multimodal models, supporting both research experimentation and intensive undergraduate training in modern AI systems.