We develop tailor-made IT systems that boost our clients' businesses.
In our rich and constantly growing portfolio you can find:
- systems supporting highly complex business processes – our own solutions that have transformed our partners' businesses and increased their profits
- our own products, pioneering in both concept and technology, created by our Research & Development Department
- last but not least: a huge backlog of innovative ideas and concepts which we will soon turn into working prototypes and, ultimately, into profitable start-ups
Currently we are looking for:
Big Data DevOps Engineer
Place of work: Warsaw
to significantly strengthen our team and expand our expertise.
If you want to:
- take part in the development of advanced IT systems
- gain valuable experience in designing and implementing complex IT infrastructure for processing large amounts of data
- work with cutting-edge Big Data (and other) technologies
- develop systems that process a huge number of requests per second
- work with the world’s top IT professionals
- carry out projects which address real business challenges
- have a real impact on the projects you work on and the environment you work in
- have a chance to propose innovative solutions and initiatives
it’s probably a good match.
Moreover, if you like:
- flexible working hours
- casual working environment and no corporate bureaucracy
- having access to benefits such as Multisport and Medicover
- working in a modern office in the centre of Warsaw with good transport links
- a relaxed atmosphere at work where your passions and commitment are appreciated
- challenges and many opportunities for development
it’s certainly a good match!
If you join us, your responsibilities will include:
- building, developing and maintaining distributed data processing and data analysis systems
- designing and developing system architecture in cooperation with software developers
- cooperating with data processing security experts
- monitoring, analyzing and optimizing systems in terms of their efficiency
- supervising and monitoring services
- automating infrastructure and incident management processes
- network and server troubleshooting
We expect:
- at least 2 years of experience in a similar role
- excellent knowledge of Linux (mainly Ubuntu)
- experience working with networks, servers and service monitoring tools (e.g. Icinga, Check_MK, Zabbix)
- knowledge of virtualization and containers (LXC, Docker, KVM, Proxmox)
- advanced knowledge of continuous configuration automation tools (e.g. Ansible)
- skills in programming with scripting languages (Bash, Python)
- skills in practical use of version control systems (Git, GitLab)
- willingness to learn and to share knowledge
- teamwork skills
- good communication skills
In addition, we will be pleased if you:
- have some knowledge of Hadoop (as well as Cassandra and Kafka) cluster administration
- are familiar with Data Warehouse and Big Data tools (Hadoop, JupyterHub, Spark, Kafka, Hive, ZooKeeper)
- have experience with Kubernetes
- have knowledge of Linux, Java VM and MySQL tuning
- have skills in the administration of servers, load balancers and HTTP reverse proxy (Apache, Nginx, HAProxy)
- know the OpenVPN and TLS protocols