Business Intelligence Engineer
About Paidy, Inc.
Paidy is Japan's pioneering and leading BNPL (Buy Now, Pay Later) service company. At Paidy, we believe in creating simple, instant experiences that take the hassle out of shopping, with a touch of magic.
Paidy offers instant, monthly-consolidated credit to consumers by removing hassles from payment and purchase experiences. Paidy uses proprietary models and machine learning to underwrite transactions in seconds and guarantee payments to merchants. Paidy increases revenue for merchants by reducing the number of incomplete transactions, increasing conversion rates, boosting average order values, and facilitating repeat purchases from consumers.
Paidy has reached an agreement to join PayPal, the global payments company. Paidy will continue to operate its existing business, maintain its brand and support a wide variety of consumer wallets and marketplaces by providing convenient and innovative services.
Paidy continues to innovate to make shopping easier and more fun both online and offline. For more information, please visit http://www.paidy.com.
About the Position
Our division is pivotal in shaping risk policies that support the company vision. We are responsible for developing and deploying machine learning models that predict risk and enable real-time payment underwriting. Our division comprises data scientists and data engineers who are building a cutting-edge risk assessment engine using a modern tech stack that includes AWS Glue, SageMaker, Apache Spark, Prefect, and Looker. Our infrastructure includes a growing array of ETL pipelines, data marts, feature stores, and models designed to address risk and business problems.
We are seeking a new team member with expertise in data modeling, feature engineering, and business intelligence. This role involves building data assets to represent the company’s unit economics, customer credit quality, fraud patterns, and overdue debt recovery processes. Your work will help develop more sophisticated policies and models to address business challenges effectively.
Key Role and Responsibilities
- Data Modeling & Feature Engineering: Understand the data needs of the business and the key metrics used for analytics and reporting. Design and implement new ways to model the data so these metrics can be calculated more efficiently and flexibly.
- ETL Pipeline Development: Develop and maintain ETL pipelines to update the tables that record these metrics for use in business intelligence and machine learning applications; a minimal sketch of such a job follows this list. Ensure this data is processed and stored accurately and efficiently. Work with the team to ensure data integrity and accessibility across various platforms.
- Enable Smarter Analytics: Work with the risk team to make key reports and dashboards easier to develop and maintain, using tools such as Looker to provide flexible views for exploring the data.
- Boost Machine Learning Capability: Support the ML data scientists in developing predictive models by building batch ML pipelines that combine ETL and SageMaker jobs to train models, generate predictions, and capture the output for use in analytics and reporting.
- Cross-Functional Collaboration: Work closely with data scientists, data engineers, product managers, and other stakeholders to align data initiatives with business objectives. Facilitate the seamless integration of new data assets and models into operational processes.
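To make the data modeling and ETL responsibilities above concrete, here is a minimal sketch of the kind of Spark batch job this role might own, written in Scala (the preferred Spark language for the team). All paths, column names, and the overdue-rate metric itself are hypothetical, invented purely for illustration; they do not describe Paidy's actual schemas or pipelines.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

/**
 * Illustrative daily batch ETL job: read raw payment events, derive a
 * per-customer credit-quality data mart, and persist it as a partitioned
 * Parquet table for BI tools and ML feature pipelines. All paths, schemas,
 * and metric definitions here are hypothetical.
 */
object CustomerRiskMartJob {

  def buildMart(payments: DataFrame, runDate: String): DataFrame =
    payments
      .filter(col("event_date") <= lit(runDate))
      .groupBy(col("customer_id"))
      .agg(
        count(lit(1)).as("txn_count"),
        sum(col("amount")).as("total_amount"),
        // Hypothetical metric: share of a customer's transactions past due.
        avg(when(col("is_overdue"), 1.0).otherwise(0.0)).as("overdue_rate")
      )
      .withColumn("run_date", lit(runDate))

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("customer-risk-mart")
      .getOrCreate()

    val runDate = args.headOption.getOrElse("2024-01-01")

    // Illustrative input: raw payment events landed by an upstream pipeline.
    val payments = spark.read.parquet("s3://example-bucket/raw/payments/")

    val mart = buildMart(payments, runDate)

    // Partition by run date so downstream BI tools and SageMaker batch jobs
    // can pick up a consistent daily snapshot.
    mart.write
      .mode("overwrite")
      .partitionBy("run_date")
      .parquet("s3://example-bucket/marts/customer_risk/")

    spark.stop()
  }
}
```

In practice, a job like this would be scheduled by an orchestrator such as Prefect or Airflow, and the resulting mart could back Looker explores as well as supply feature inputs to SageMaker batch jobs.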
Skills and Requirements
- Stakeholder Collaboration: Enthusiasm for working with business stakeholders and data scientists to deliver tangible, visible business value. Proactive in taking ownership of projects and building innovative solutions independently.
- Industry Experience: Experience in financial services, payment services, or fraud prevention is a plus, but not required. A strong interest in these topics is essential.
- Technical Proficiency:
- Metrics Translation: Skilled in translating industry-specific metrics and definitions into well-documented, performant code.
- Spark Development: Experience building production Spark applications for batch ETL pipelines, as well as processing terabyte-scale data with efficient SQL (for Spark, Scala is preferred; PySpark is acceptable with a willingness to learn Scala).
- Job Orchestration: At least 2 years of experience building ETL pipelines using job orchestration tools like Airflow or Prefect.
- Data Integration: Experience in creating data marts or sources utilized by data scientists, data analysts, and business end users.
- Business Intelligence Tools: Experience building data marts for use in BI applications such as Looker, Tableau, or Power BI.
- The Paidy team will ask about your experience with the Paidy app during the interview. Please download the app and try it out!
- iOS: https://apps.apple.com/jp/app/paidy/id1220373112
- Android: https://play.google.com/store/apps/details?id=com.paidy.paidy&hl=en&gl=US
If you are unable to download the Paidy app due to regional restrictions, please download a similar app, such as Klarna, Afterpay, or Affirm, and form your own opinions on these applications and services.
- Please note that you must be eligible to work in Japan.
What We Offer You
- Diverse team with 238+ colleagues from 42+ countries
- Exciting work opportunities in a rapidly growing organization
- Cross-functional collaboration
- Flexible work-from-home arrangement
- Competitive salary and benefits
Paidy Values
Be a winner / 勝ちにこだわる
- Beat expectations / 常に期待値を超える
- Display surprising speed / 人をスピードで驚かす
- Embrace risk / リスクを恐れない
Own it and deliver / 結果を出す
- Commit to what, when and how to deliver / 目的・やり方・期限にコミットする
- Own the actions to deliver / 結果のためのアクションにこだわる
- Embrace conflict when needed to deliver results / 必要なら対立・衝突も恐れない
Play an integral role / 大切なピースになる
- Make an irreplaceable contribution to our business / 替えの効かない貢献をする
- Embrace and bridge differences in language and culture / 皆が言語と文化の架け橋になる
- Raise the bar / スタンダードを上げ続ける