- Prepare, clean, and transform data to support analytics, reporting, and data-driven decision-making
- Develop, optimize, and manage data pipelines for collecting, storing, transforming, and distributing data across systems
- Manage and integrate various databases, ensuring seamless access to structured and unstructured data
- Design, build, and maintain scalable data infrastructure and tools that facilitate data analysis and machine learning processes
- Work closely with cross-functional teams, including data scientists, analysts, and engineers, to deliver actionable insights and data solutions
- Perform initial analysis of data, generating reports, visualizations, and insights to inform business decisions and identify trends
- Engage in ongoing skills development and stay updated with the latest industry trends, tools, and technologies
- 4+ years of professional experience
- Databases: advanced MS SQL and PostgreSQL (required)
- Extensive experience in T-SQL, both developing new database functionality and maintaining existing code
- Performance optimization of stored procedures and complex views
- Deep understanding of DWH/ETL and BI concepts
- Solid experience with Azure Data Factory (creating, maintaining, and optimizing pipelines)
- Experience working with data lakes
- Experience with Azure Databricks is required
- B1+ level of English
The client is a UK-based company that helps operator/carrier clients offer new loan and lease finance products using securitization technology. The project sits at the intersection of FinTech and Telecom: a complex microservices-based system with extensive infrastructure.
The engineering team numbers around 100 people and is growing steadily.
At Vention, we assemble senior-level, dedicated teams of developers to help fast-growing startups and innovative enterprises drive impact and achieve their goals. We’ve delivered solutions across multiple domains, including FinTech, PropTech, AdTech, HealthTech, e-commerce, and more.
Our Data team works with clients to create data platforms from scratch or modify and update existing platforms. The tech stack depends on the project, but we mainly use Spark (along with Scala, Python, or Java) – as well as Apache Kafka, Apache Cassandra, Apache Hadoop, Apache Parquet, and AWS.
Internal knowledge transfer activities are conducted within the Data Engineering Family (which includes data practice & data competency) – it is a space for all of our specialists to share their experiences, learn new skills, host meetups, mentor others, and more.
Our culture is rooted in the belief that ongoing growth benefits employees and the company alike. Because of that, we offer:
- An individualized approach to career development, tailoring growth plans to every role
- Access to our technology mentorship program as a mentor or mentee
- The opportunity to contribute to up to 300 original projects in 30 different fields
And that's not all! We also offer:
- Private health insurance (medical and dental care services included)
- 25 paid working days/year for work-life balance
- Support for significant life events
- Access to the Multisport program for a healthy lifestyle
- Referral bonuses
Engineer your success!