JPMorgan Chase (JPMC) is a leading global financial services firm with $2 trillion in assets and operations in more than 60 countries. Over the last few years it has been on a transformation journey to become a client-centric, technology-driven company. With an annual tech budget of $10B+, it has begun investing significantly in building next-generation core infrastructure, data, and AI technology.

The Chief Technology Office within JPMorgan Chase & Co. is responsible for the evolution of modern, efficient development practices and tooling for an optimal engineer experience.

As the next push in this investment, JPMC is hiring the best talent to join the newly formed Chief Technology Office Enterprise Integration Services Group. We are building next-generation technology that combines JPMC's unique data and full-service advantage to develop high-impact AI applications and platforms for the financial services industry. We are looking for people who are excited about this opportunity.


    Responsibilities

  • Collaborate with all of JPMorgan’s lines of business and functions to deliver software solutions.
  • Experiment with, develop, and productionize high-quality machine learning models, services, and platforms to make a significant technology and business impact.
  • Lead a team of engineers to build products and deliver solutions, depending on previous architecture and technical leadership experience.
  • Design and implement highly scalable and reliable data processing pipelines, and perform analyses to drive insights and optimize business results.
    Minimum Qualifications

  • BS, MS, or PhD in Computer Science or a related quantitative field.
  • Solid programming skills with C/C++, Java, Python or other equivalent languages.
  • Experience across a broad range of modern data science and analytics tools (e.g., R, SQL, Stata, NoSQL, Hive, Hadoop, Spark, Python).
  • Experience with big-data technologies such as Hadoop, Spark, and SparkML.

  • Deep knowledge of Machine Learning, Data Mining, Information Retrieval, and Statistics.
  • Expertise in at least one of the following areas: Natural Language Processing, Computer Vision, Speech Recognition, Reinforcement Learning, Ranking and Recommendation, or Time Series Analysis.
  • Experience with major machine learning frameworks: TensorFlow, Caffe/Caffe2, PyTorch, Keras, MXNet, scikit-learn.
  • Advanced data mining and exploratory data analysis (EDA) skills.

  • Experience in ETL pipelines, both batch and real-time data processing.
  • 8+ years’ experience with machine learning APIs and computational packages (e.g., TensorFlow, LightGBM, PyTorch, Keras, scikit-learn, NumPy, SciPy, pandas, H2O, SHAP, CatBoost).
  • Proficiency in visualization tools such as Looker, Domo, Tableau, Cognos, Power BI, or similar tools.

  • Strong analytical and critical thinking skills.
  • Self-motivated team player with strong communication skills.
  • Ability to understand various data structures and common data transformation methods.
  • Excellent pattern recognition and predictive modeling skills.
  • Experience with large imbalanced datasets.

    Preferred Qualifications

  • Cloud computing: Google Cloud, Amazon Web Services, Azure, Docker, Kubernetes
  • Experience in big data technologies: Hadoop, Hive, Spark, Kafka
  • Experience in distributed system design and development
  • Practical experience processing large-scale datasets in big-data environments, including data mining, text curation, and information retrieval tasks