Why join GFT?
You will work with and learn from top IT experts. You will join a crew of experienced engineers: 60% of our employees are senior level.
Interested in the cloud? You will enjoy our full support in developing your skills: training programs, certifications and our internal community of experts. We have strong partnerships with the top cloud providers: Google, Amazon and Microsoft. We are number one in Poland in the number of GCP certifications, and apart from GCP you can also develop in AWS or Azure.
We are focused on development and knowledge sharing. Internal expert communities provide a comfortable environment where you can develop your skillset in areas such as blockchain, Big Data, cloud computing or artificial intelligence.
You will work in a stable company (32 years on the market) on demanding and challenging projects for the biggest financial institutions in the world.
What will you do?
You will be working with one of the world's leading banks, performing complex data engineering tasks on their centralized Big Data platform, built around technologies such as Spark, Hive, Presto, Alluxio and Airflow, and complete with data governance, metadata management and MLOps solutions. The job offers a real opportunity to work with terabyte-sized datasets and low-latency stream processing jobs, as you will be working with the Data Lake, Data Marts and many Data Warehouses on the platform.
As a Data Engineer you will be involved in two projects:
The first project involves building a state-of-the-art arbitration layer that elastically composes insights from multiple systems, derived from either business rules or Machine Learning algorithms, to provide optimized outcomes for each client application, such as mobile banking and online banking. The project will have a hybrid deployment spanning both public (AWS) and private cloud.
In the second project, you will implement a financial planning system for bank customers, providing features such as expense classification, budget planning and financial goal tracking in a deeply personalized fashion, based on Machine Learning insights derived from transactional and personal data.
What do we expect?
- 5+ years of experience as a Software Engineer or Data Engineer
- Hands-on and technical experience with Apache Spark
- Experience deploying, operating and debugging big data platforms and tools built on Spark
- Proven track record of building and operating large-scale, high-throughput, low-latency production systems
- Proficiency in at least one high-level language (Java, Scala or Python), along with a fair understanding of data structures, algorithms and their runtime complexities
- Solid understanding of both object-oriented and functional programming concepts
- Experience with Development Tools for CI/CD, Unit and Integration testing, Automation and Orchestration, REST API, BI tools and SQL Interfaces
- Experience with Hive, Presto, Alluxio or Airflow
What do we offer?
- Working in a highly experienced and dedicated team
- Competitive salary and extra benefit package that can be tailored to your personal needs (private medical coverage, sport & recreation package, lunch subsidy, life insurance, etc.)
- Permanent or B2B contract
- Online training and certifications fitted to your career path
- Free online foreign language lessons
- Regular social events (as our employees' health is our priority, all events are currently conducted online)
- Access to e-learning platform
- Ergonomic and functional working space with 2 monitors (you can also borrow monitors and office chair for your home office)
- Relocation package for candidates from outside Krakow