For our client we are looking for a Cloud Engineer (m/f/d).
Project description:
As part of a long-term cloud project, you will support our client with the compliance-conform setup and configuration of modern applications in Kubernetes environments. The goal is to ensure the operation and further development of the cloud infrastructure in line with the client's high security and quality requirements. You will make a decisive contribution to the automation, scalability, and efficiency of the IT services and actively help shape the development of innovative cloud solutions.
Key parameters:
Start: asap
Duration: until 31.12.2025
Capacity: full-time (100%)
Location: 90% remote, approx. 5-10% on-site (site in Germany, not yet known)
Budget hourly rate: approx. EUR 80.00 - 90.00
Note: Please be aware that German citizenship is a prerequisite for this position, as a security clearance under the German Security Clearance Act (SÜG) will be carried out.
Tasks:
- Compliance-conform setup and configuration of applications (e.g. GitLab, PostgreSQL) in Kubernetes according to the vendor reference architecture
- Further development of service monitoring and alerting with the kube-prometheus-stack, fully via GitOps
- Development of an in-house cloud API using the Kubernetes Resource Model, based on kcp.io
- Management of VMware Aria resources in Kubernetes via Crossplane
- Setup and operation of an internal developer portal on top of the cloud API, e.g. with Backstage
Experience & skills:
- At least 3 years of experience with Kubernetes
- At least 3 years of experience with Argo CD & GitOps
- At least 3 years of experience with Linux
- At least 2 years of experience with Crossplane
- At least 2 years of experience with Terraform or OpenTofu
- At least 1 year of experience with Go
- Desirable: experience with kcp, Ansible, certificate management (e.g. cert-manager), Sysdig, Nexus, network technologies (e.g. firewalls, routing, ingress)
For our client we are looking for an expert (f/m/d) to create a Veeam backup concept.
Key data:
Start: July
Duration: 3-4 months++
Capacity: full-time if possible
Location: Cologne, possibly Potsdam, remote (regular coordination meetings on-site)
Tasks:
- Development of a new backup concept for the client, who operates a complex and heterogeneous infrastructure
- Analysis of the business units' requirements and their translation into a clear, understandable concept
- Advising the client on questions concerning the backup infrastructure and processes
Skills:
- Design of backup environments
- Veeam Backup
Project description:
In this project you will support the central data platform team of a renowned bank. Following the successful introduction of Kafka, this role focuses on building and extending data lineage solutions. The goal is to ensure the traceability and transparency of all data flows across the central platform. OpenLineage and Marquez are the main tools in use. Your expertise will contribute significantly to meeting regulatory requirements and documenting data flows efficiently.
Key parameters:
Start: asap
Duration: until the end of 2025, with an option to extend
Capacity: full-time
Location: Hanover/remote (predominantly remote, 95%; occasional on-site days in Hanover)
Budget: max. hourly rate: EUR 70/h remote, EUR 76/h on-site
Tasks:
- Design, implementation, and operation of data lineage solutions in the Kafka environment using OpenLineage and Marquez
- Integration of data lineage into existing Kafka data streams and ETL processes
- Development of interfaces and connectors for the automated capture and documentation of data flows
- Collaboration with data engineers, data stewards, and compliance teams to meet regulatory requirements for data traceability
- Advice on the selection and introduction of suitable data lineage tools and methods
- Support in the continuous improvement of the data platform and its documentation
Experience & skills:
Required:
- At least 5 years of experience in developing data platforms and/or streaming solutions
- Sound knowledge of Apache Kafka
- Experience with data lineage tools, ideally OpenLineage and Marquez
- Very good programming skills in Java (alternatively Python)
- Experience with databases (e.g. PostgreSQL, DB2) and the integration of data sources
Desirable:
- Experience in banking or the wider financial sector
- Knowledge of data governance and regulatory requirements (e.g. BCBS 239, DORA)
- Familiarity with agile methods
- Fluent German and English
For our client we are currently looking for an OLVM (Oracle Linux Virtualization Manager) expert (m/f/d).
Start: asap
Duration: until the end of the year+
Location: 100% remote
Capacity: approx. 50%
We are looking for an OLVM expert (m/f/d) with a focus on the design, implementation, and operation of highly available OLVM environments.
- Responsible for developing concepts; for the installation, configuration, and automation of OLVM clusters; and for close collaboration with the technical service infrastructure team during commissioning.
- Sound knowledge of monitoring and of running the virtualization platform reliably.
Concept and design:
- Creation of comprehensive concepts for OLVM virtualization environments with a focus on scalability and high availability, following best practices.
Design of highly available OLVM environments:
- Experience in building redundant, fault-tolerant OLVM clusters to ensure uninterrupted operation.
Installation and configuration:
- Comprehensive knowledge of installing and setting up OLVM clusters, including network, storage, and resource management.
Commissioning:
- Coordination and execution of system commissioning in close alignment with the technical service infrastructure team.
Automation:
- Development and implementation of automated installation processes for OLVM systems and Oracle Enterprise Linux VMs (e.g. via scripts and configuration management).
Monitoring and operations:
- Know-how in building and operating monitoring solutions for the OLVM platform to ensure performance, availability, and security.
For our client we are looking for a (Junior) Cloud Security Architect (f/m/d) who, after an initial three-month freelance project, is open to a permanent position, provided the conditions are right.
Start: 15.08.2025
Duration: 3 months freelance, then long-term permanent
Capacity: 100% if possible
Location: 75% Remote, 25% Berlin (1 week Berlin / 3 weeks remote in rotation), up to 50% onsite in peak times
Language: English, German is a plus
Annual salary in permanent position: EUR 65,000
Team:
The Security Architect advises the (Platform) Security Architects and CRS sub-streams in developing and maintaining secure platform architectures by contributing to security design, threat modeling, and compliance activities.
Tasks:
- Objective: Consult platform and security architects
- Tasks: Advise the platform architect and product line architects in the following areas: security architecture guiding principles for the platform, platform access controls, integration points, and secure design principles; perform threat modeling to identify and address potential platform risks; apply the cybersecurity framework; collaborate with the platform architecture team to integrate security into designs
- Objective: Consult in security architecture management processes
- Tasks: Consult with cross-functional teams (e.g., platform architects, product owners, compliance teams); produce documentation on security processes and security architecture processes
Skills (must-have):
- Familiarity with security architecture principles, secure design patterns, and frameworks.
- Familiarity with the following security domains: Security Architecture and Design, Cloud Security, Identity and Access Management (IAM), Application Security, DevSecOps and Automation, Incident Response and Resilience, Cryptography and Data Protection
- Familiarity with threat modeling methodologies and risk assessment.
- Experience designing and implementing security and compliance controls for platforms.
- Experience translating technical security requirements into actionable designs and documentation.
Skills (should-have):
- Cross-functional collaboration skills to work with technical and non-technical stakeholders.
- Experience with DevSecOps practices and tools for integrating security into platform development
- Experience with cloud posture management and detection tools (CSPM, KSP, Workload protection)
- Experience with baseline detection and response toolsets (SIEM, EDR, XDR)
- Good command and understanding of security & compliance standards and frameworks including ISO/IEC 27001, CSA CCM, BSI Grundschutz, CSI, NIST CSF, NIST OSCAL, etc.
- Basic understanding of sector-specific regulations (e.g. NIS2, CRA, KRITIS, BSI C5, …)
- Certification in (security) architecture or cloud security (e.g., CISSP, SABSA, TOGAF, CCSK)
For our client we are looking for a Cloud Native Data Engineer (f/m/d).
Start: asap
Duration: until 31.10.2025 (long-term engagement possible into 2026)
Capacity: 100% if possible
Location: 75% Remote, 25% Brussels, Belgium (1 week Brussels / 3 weeks remote in rotation), up to 50% onsite in peak times
Language: English
Project:
As part of its critical role in the energy sector, the client is developing a next generation Settlement System to replace current legacy applications which are very difficult to maintain, reaching end-of-life and no longer suitable for complex developments. This new system aims to support the company's strategic objectives by:
- Accelerating settlement cycles to enable near real-time invoicing, thereby reducing credit risk and enhancing financial forecasting.
- Enhancing scalability and automation to handle increasing data volumes with minimal manual effort.
- Improving data quality and validation, ensuring accurate, consistent, and timely settlements through automated controls.
- Enabling advanced analytics and self-service insights by providing real-time, granular data access via EPIC and a centralized Data Lake.
- Supporting simulations to evaluate potential savings or impacts under various consumption scenarios.
- Standardizing market communication and integrations to seamlessly incorporate new metering technologies and automated data exchange with DSOs.
This initiative is essential for ensuring a future-proof, transparent, and agile settlement process aligned with product vision and the evolving energy market landscape.
Tasks:
- Design, development, and maintenance of a modular and scalable data architecture
- Efficient data modeling and ensuring robust data integration
- Drive application migration to cloud native infrastructure
- Ensure consistent documentation
Skills (must-have):
- A minimum experience of 5 years in Cloud Native Data Engineering.
- Experience with rearchitecting existing monolithic architecture to micro-services based Cloud Native architectures.
- Strong understanding of Cloud Native architectures (loosely coupled services, containers, horizontal scalability, application resilience patterns).
- Proficiency in at least one programming language – Java or Scala
- Knowledge and experience with at least some of the Data technologies/frameworks:
o Workflow orchestration (Airflow, Oozie, etc.)
o Data integration/ingestion (NiFi, Flume, etc.)
o Messaging/data streaming (Kafka, RabbitMQ, etc.)
o Data processing (Spark, Flink, etc.)
o RDBMS (PostgreSQL, MySQL, etc.)
o NoSQL storages (MongoDB, Cassandra, Neo4j, etc.)
o Time series (InfluxDB, OpenTSDB, TimescaleDB, Prometheus, etc.)
- and/or their cloud-provided counterparts, i.e., cloud data/analytics services (GCP, Azure, AWS)
- Proficiency in the following tech stack:
- Deployment & containerization: Docker, Kubernetes, Helm
- CI/CD & DevOps tools: Azure DevOps, GitLab CI Actions, GitOps, GitLab, Bash/shell scripting, Linux
- Database change management tools (such as Liquibase or Flyway)
- Familiarity with agile development methodologies and tools (e.g., Scrum, SAFe, JIRA, Confluence).
Skills (should-have):
- Relevant certifications in cloud and Cloud Native technologies
For our client we are looking for a Cloud Native Data Architect (f/m/d).
Start: asap
Duration: until 31.10.2025 (long-term engagement possible into 2026)
Capacity: 100% if possible
Location: 75% Remote, 25% Brussels, Belgium (1 week Brussels / 3 weeks remote in rotation), up to 50% onsite in peak times
Language: English, German is a plus
Project:
As part of its critical role in the energy sector, the client is developing a next-generation Settlement System to replace current legacy applications which are very difficult to maintain, reaching end-of-life, and no longer suitable for complex developments. This new system aims to support the company's strategic objectives by:
- Accelerating settlement cycles to enable near real-time invoicing, thereby reducing credit risk and enhancing financial forecasting.
- Enhancing scalability and automation to handle increasing data volumes with minimal manual effort.
- Improving data quality and validation, ensuring accurate, consistent, and timely settlements through automated controls.
- Enabling advanced analytics and self-service insights by providing real-time, granular data access via EPIC and a centralized Data Lake.
- Supporting simulations to evaluate potential savings or impacts under various consumption scenarios.
- Standardizing market communication and integrations to seamlessly incorporate new metering technologies and automated data exchange with DSOs.
This initiative is essential for ensuring a future-proof, transparent, and agile settlement process aligned with product vision and the evolving energy market landscape.
Tasks:
- Design and Implement Cloud Native Data Architecture for the new application on a new Cloud Platform
- Define and maintain business-aligned data models
- Migration of data-intensive legacy applications to the new platform
- Continuous Improvement through Documentation and Consulting of Stakeholders
Skills (must-have):
- A minimum experience of 3 years as a Cloud Native Data Architect
- Deep understanding of data architectures (incl. data quality and data SLOs) in the context of cloud-native and distributed environments and the ability to articulate data concepts to different audiences
- Proven experience in helping development teams navigate the CAP theorem with trade-offs when choosing data storage technologies
- Experience in building data solutions that enable data democracy in the organization and prevent data silos while ensuring data governance and compliance
- Strong knowledge of data security and the ability to specify, at an architectural level, the security best practices that must be incorporated when working with data (encryption, access control, data classification and compliance, etc.)
- Demonstrated experience in explaining the maturity required to use cloud-native data technologies, not only on the IT side but especially on the business side of an enterprise.
- Prior experience with the following technologies:
o Relational databases (e.g. PostgreSQL)
o NoSQL databases (e.g. MongoDB, Cassandra)
o Time series databases (e.g. TimescaleDB, InfluxDB)
o Graph databases (e.g. Neo4j)
o Data warehouse and data lake architectures
o Messaging/data streaming systems (Apache Kafka)
o Data processing and workflows (e.g. Spark, Dagster, Apache Airflow)
- Proficiency in at least one programming language – Python, Java or Scala
- Familiarity with agile development methodologies and tools (e.g., Scrum, SAFe, JIRA, Confluence)
Skills (should-have):
- Relevant certifications in cloud and Cloud Native technologies.
- Understanding about the principles of Data Mesh & data products
For our client we are looking for a Product Portfolio Team Lead (f/m/d) Focus Cloud Compute.
Start: asap
Duration: until 31.10.2025 (long-term engagement possible into 2026)
Capacity: 100% if possible
Location: 75% Remote, 25% Berlin (1 week Berlin / 3 weeks remote in rotation), up to 50% onsite in peak times
Language: English, German is a plus
Team:
The Compute portfolio within the Product Line Infrastructure delivers a comprehensive virtualization and OS lifecycle management environment, ensuring consistent, scalable, and secure infrastructure services across the organization. It comprises two sub-portfolios:
• Compute Virtualization, which focuses on the development and lifecycle management of virtualization platforms (VMM, HVM, K8M), including the planned strategic migration from VMware to KVM.
• OS Image Manager, responsible for managing the full lifecycle of operating system images using Red Hat Satellite and the Ansible Automation Platform.
The portfolio is steered both strategically and operationally to cover the full product lifecycle—from specification and architectural design to deployment and operational stability. Product management responsibilities include defining product requirements, aligning with stakeholders, drafting product specifications, and evaluating and introducing new platform solutions in alignment with evolving infrastructure needs.
Coordination within the Product Line ensures that the portfolio remains aligned with the overall infrastructure strategy and delivers consistent value through standardized, reliable, and future-ready compute capabilities.
Objectives & Tasks:
- Define and Own Product Strategy & Portfolio Management
- Oversee Product Development Lifecycle
- Coordinate a Portfolio
Tasks:
o Compute Virtualization – Responsible for VMM, HVM, and K8M lifecycle management.
o OS Image Manager – Oversees OS image lifecycle, Red Hat Satellite, and Ansible Automation Platform.
- Product Management
- Promote Cross-Functional Alignment
- Define Portfolio Governance
Skills (must-have):
- 7+ years of experience in IT infrastructure, preferably in compute/virtualization domains.
- 3+ years of experience in product management, product ownership, or portfolio leadership roles.
- Experience with compute products in a private or public cloud environment.
- Proven track record in managing complex technical products or services across their lifecycle.
- Understanding of virtualization technologies (e.g., VMware, KVM, Hypervisors).
- Experience defining product strategy and roadmaps.
- Skilled in writing clear product specifications and acceptance criteria.
- Strong leadership skills with the ability to lead cross-functional teams and manage an entire product portfolio team.
Skills (should-have):
- Familiarity with containerization and orchestration platforms (e.g., Kubernetes).
- Knowledge of OS image lifecycle management, Red Hat Satellite, and automation platforms like Ansible.
- PO or Project Management certification
- Experience in a private cloud build-up
For our client we are looking for a Domain Architect (f/m/d) Cloud.
Start: 28.07.2025
Duration: until 31.10.2025 (long-term engagement possible into 2026)
Capacity: 100% if possible
Location: 75% Remote, 25% Berlin (1 week Berlin / 3 weeks remote in rotation), up to 50% onsite in peak times
Language: English, German is a plus
Team:
The ESL Product Line is responsible for a product portfolio central to the platform, consisting of an Infrastructure as a Service product, a managed Kubernetes service, a resource management service to facilitate scalable management of platform permissions, and a service lifecycle workflow engine. Together, these services constitute a core part of an on-premise private cloud platform for all business applications of the client, including IT/OT-critical applications required for maintaining and operating the infrastructure.
For the whole product portfolio, the product line owns the complete product flow, from product management, architecture, delivery up until Tier 3 operations.
Tasks:
- Architecture Design: Designing and overseeing IT systems that meet business needs
- Technology Evaluation: Assessing and recommending technologies and tools that best meet organizational needs
- Architecture Guidance: Providing architectural consultancy and coordination across ESL to ensure successful implementation
- Structure the created designs and POCs into individual pieces of work and communicate them in order to guide the refinement of the Engineering/DevOps team
- Design for scalability and optimize performance, considering load balancing, caching, and resource allocation
Skills (must-have):
- Strong hands-on experience in software and distributed system development and engineering to be able to quickly design and build POCs.
- Hands-on experience with at least one public cloud platform (Kubernetes, Networking, Cloud Storage and Monitoring).
- Profound understanding of the concepts behind the resource hierarchy of public cloud providers as well as the lifecycle of managed services offerings.
- Experience in architecture, design, and development of Kubernetes native operators (kubebuilder) and managing resources in the Kubernetes resource model (CRDs, CRs).
- Development and architecture experience with cloud-native technologies and Kubernetes-related tooling and frameworks (including architecture patterns around microservices, brokering (pub-sub), event sourcing, sharding/partitioning, load throttling/gateway patterns, and performance management).
- Deep understanding of CI/CD workflow and experience with IaC, GitOps tools.
- Experience with the controller-runtime library.
Skills (should-have):
- Experience/familiarity with test-, behavior-, and observability-driven development
- Understanding of API design, development, and migration
- Understanding of service discovery
- Experience with designing RBAC and other access control methodologies
- Proficiency in both spoken and written German or Ukrainian (at least C1)
For our client we are looking for a Go Developer (f/m/d) with DevOps and Cloud Infrastructure Know-How.
Start: 04.08.2025
Duration: until 31.10.2025 (long-term engagement possible into 2026)
Capacity: 100% if possible
Location: 75% Remote, 25% Berlin (1 week Berlin / 3 weeks remote in rotation), up to 50% onsite in peak times
Language: English, German is a plus
Team:
The ESL Product Line is responsible for a product portfolio central to the program, consisting of an Infrastructure as a Service Product, a managed Kubernetes Service, a resource management service to facilitate scalable management of platform permissions and a service lifecycle workflow engine enabling. All services together constitute a core part of an on-premise private cloud platform for all business applications of the client, including IT/OT critical applications required for maintaining and operating the infrastructure.
For the whole product portfolio, the product line owns the complete product flow, from product management, architecture, delivery up until Tier 3 operations.
Tasks:
- Development of Go lang based modules for private cloud
- Testing and Debugging: Validation regarding quality and functionality of developed code by means of testing and debugging
- Conducting of Code Reviews
- Maintaining CI/CD Pipeline: Contribution to CI/CD Pipeline Maintenance
Skills (must-have):
- Minimum of 3 years of software development experience (in Go, C/C++, or Python), with significant experience building RESTful services in distributed environments. The development language of the project is Go, and applicants must be willing to use it exclusively for the development of the core components.
- Strong system programming skills, with proficiency in low-level interactions, memory management, and performance optimization.
- Sound understanding of containerization and container management with Kubernetes, packaging of applications and customization of deployments.
- Experience identifying (e.g. by penetration testing) and eliminating software vulnerabilities
- Experience with common hyperscalers (GCP and others).
- Ability to set up and manage CI/CD pipelines using tools like GitLab, Jenkins, Tekton, Argo Workflows, and Argo CD, as well as hands-on experience with GitOps/IaC (supported by our dedicated DevOps engineers).
- Proficiency in writing and maintaining unit and integration tests and their incorporation in automated test frameworks.
- Deep understanding of networking concepts, including protocols, load balancing, and security.
Skills (should-have):
- Qualifications in IT governance and security