The role is part of the Global Clearing Technology - Data Engineering Group within the Corporate & Investment Banking line of business. You'll join an inspiring and talented team of global technologists dedicated to building new product offerings and improving the application design, development, and testing that go into creating high-quality software using Agile software development practices.
If you are passionate about programming, excited about experimenting with cutting-edge distributed and cloud technologies, and looking for challenges that stretch your talents, then this could be the role for you.
As a member of this team, you will contribute to building our strategic big data application platform and advancing our cloud journey into the public (AWS) and private cloud. This role will provide you with an opportunity to develop applications using various Big Data technologies such as Hadoop, HBase, Hive, Spark, Kafka, and Solr, and cloud storage, compute, and analytics services such as AWS S3, Redshift, RDS, MSK, EMR, Athena, Lambda, and Kubernetes, to name a few.
Job Responsibilities
Develop and implement data pipelines that extract, transform, and load data into a data lake, helping the firm reach its strategic and business goals
Work on ingesting, storing, processing and analyzing large data sets
Create scalable and high-performance web service APIs
Investigate and analyze alternative solutions for data storage, processing, and related concerns to ensure the most streamlined approaches are implemented
Required Qualifications
Solid understanding of object-oriented concepts and data structures
Proficiency in a programming language such as Java, Scala, or Python
Knowledge of or exposure to AWS services such as S3, Redshift, RDS, EMR, MSK, EKS, EC2, Lambda, Athena, and Redshift Spectrum
Proficient knowledge of SQL with any RDBMS
Basic understanding of Big Data & Cloud computing
Good analytical and problem-solving skills
Strong communication skills (verbal and written), with the ability to communicate across teams
Preferred Qualifications
Design and development experience with Apache Spark and the Hadoop framework
Understanding of RDDs and DataFrames within Spark
Knowledge/Exposure to HDFS, HBase, Hive, Solr, Kafka
Knowledge of or exposure to container-orchestration platforms such as Kubernetes, and to microservice architecture
J.P. Morgan is a global leader in financial services, providing strategic advice and products to the world's most prominent corporations, governments, wealthy individuals and institutional investors. Our "first-class business in a first-class way" approach to serving clients drives everything we do. We strive to build trusted, long-term partnerships to help our clients achieve their business objectives.
We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. In accordance with applicable law, we make reasonable accommodations for applicants' and employees' religious practices and beliefs, as well as any mental health or physical disability needs.
About the Team
Our Corporate & Investment Bank relies on innovators like you to build and maintain the technology that helps us safely service the world's important corporations, governments and institutions. You'll develop solutions that help the bank provide strategic advice, raise capital, manage risk, and extend liquidity in markets spanning over 100 countries around the world.