Core responsibilities
- Expand, maintain, and optimize the existing on-premise AI infrastructure in line with user needs and technical requirements
- Manage local servers, GPU workstations, and network drives
- Build and maintain Docker environments for AI tools, models, and their deployment
- Resolve compatibility issues between containers, data storage systems, and web interfaces
- Develop internal RESTful APIs and secure network endpoints (see the example sketch after this list)
- Collaborate with IT on firewall management and secure access to AI systems
- Provide technical support for integrating open-source tools with local systems
- Modify internal data sources and create basic data extractions or interface elements
- Contribute ideas for infrastructure solutions that make AI models efficient and accessible to end users
- Support model improvement and the extension of AI functionality (if of interest)
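To give a concrete picture of the RESTful API work listed above, here is a minimal sketch of an internal, token-protected endpoint in Python with Flask. The route, header name, token variable, and run_model() helper are hypothetical placeholders, not an existing internal API.

```python
# Minimal illustrative sketch of an internal, token-protected REST endpoint.
# The route, header, and run_model() helper are hypothetical placeholders.
import os

from flask import Flask, abort, jsonify, request

app = Flask(__name__)
API_TOKEN = os.environ.get("INTERNAL_API_TOKEN", "")  # shared secret, e.g. loaded from a .env file


def run_model(prompt: str) -> str:
    """Stand-in for a call into a locally hosted AI model."""
    return f"echo: {prompt}"


@app.route("/api/v1/generate", methods=["POST"])
def generate():
    # Reject requests without the expected token so the endpoint is not open to the whole network
    if not API_TOKEN or request.headers.get("X-API-Token") != API_TOKEN:
        abort(401)
    payload = request.get_json(silent=True) or {}
    return jsonify({"result": run_model(payload.get("prompt", ""))})


if __name__ == "__main__":
    # Bind to localhost; wider access is governed by firewall rules agreed with IT
    app.run(host="127.0.0.1", port=8000)
```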
Required skills
- Basic knowledge of Python (able to understand, debug, and integrate scripts)
- Experience with containerization (Docker or equivalent tools)
- Experience with Linux and command-line tools
- Familiarity with building or using RESTful APIs
- Understanding of network settings and firewall restrictions
- Basic knowledge of configuration formats (JSON, YAML, .env); a loading sketch follows this list
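The configuration-format bullet above refers to the kind of loading shown in this short sketch; the file names are hypothetical, and the third-party PyYAML and python-dotenv packages are assumed to be available.

```python
# Minimal sketch of reading the configuration formats named above (JSON, YAML, .env).
# File names are hypothetical; PyYAML and python-dotenv are assumed to be installed.
import json
import os

import yaml                     # PyYAML
from dotenv import load_dotenv  # python-dotenv

with open("model_config.json") as f:
    model_cfg = json.load(f)            # plain dict, e.g. {"model": "llama-3", "gpu": 0}

with open("docker-compose.yml") as f:
    compose_cfg = yaml.safe_load(f)     # nested dict mirroring the YAML structure

load_dotenv(".env")                     # exposes KEY=value pairs through os.environ
api_token = os.environ.get("INTERNAL_API_TOKEN")

print(model_cfg, list(compose_cfg.get("services", {})), bool(api_token))
```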
Assets (nice to have):
- Ability to work with or integrate SQL-like data sources
- Knowledge of GPU configuration (CUDA, drivers, resource management); a sanity-check sketch follows this list
- Experience with simple web interfaces: front-end adjustments (HTML/CSS/JS) or back-end tools (e.g. Flask, Streamlit)
- Experience with CI/CD workflows (automated testing/deployment)
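As an illustration of the GPU-configuration asset mentioned above, a quick sanity check like the following (assuming PyTorch is installed) confirms that the drivers and CUDA runtime are visible from Python.

```python
# Minimal sketch of a GPU sanity check on a local workstation; assumes PyTorch is installed.
import torch

if torch.cuda.is_available():
    print(f"CUDA runtime version: {torch.version.cuda}")
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GiB")
else:
    print("No usable CUDA device found; check drivers and the CUDA installation")
```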
Relevant degrees:
- Master of Science in:
  - Computer science / informatics
  - Software engineering
  - Applied computer science
  - Artificial intelligence (with a technical focus)
- Professional Bachelor in:
  - Applied computer science
  - Electronics-ICT (with experience in software development or network infrastructure)