The company is located near the junction of Highways 280 and 92, and our team is hiring a Hadoop Engineer. The interview apparently does not include algorithm questions. So far, every resume we have received has come from Indian applicants; if you are interested, please contact me via private message on this site.
The job description is below:
Hadoop Engineer
Location: San Mateo, CA, USA (headquarters)
Job Description
We are looking for a solid Hadoop engineer to join our team. Your role will
be to help us develop Big Data applications that have an impact on our
customers. The role of the Hadoop Engineer will be to develop ETL data
pipelines and to work with internal teams to deploy using internal DevOps
tools. The ideal candidate will have a solid background working with
distributed Big Data technologies, SQL-based databases, and BI tools. You
should have a combination of coding skills, design skills, and communication
skills that will significantly enhance the capacity and effectiveness of
our global team.
Responsibilities:
Design and implement data integrations between big data systems, databases,
and other internal/external business systems
Develop and implement data models that use advanced statistical and machine
learning techniques.
Help manage the Hadoop cluster and other BI infrastructure.
Partner with DevOps teams to create new processes and programs for managing
the Continuous Development and Continuous Integration of ETL Pipelines and
BI solutions into our products.
Partner with marketing and product management to create reports and
dashboards to support upmarket growth.
Partner with Operations and Engineering to create reports and dashboards on
the health of our infrastructure.
Partner with finance to create reports and dashboards on the company's
finances.
Partner with Sales to create reports and dashboards to help understand
our sales processes.
Provide in-depth troubleshooting for all reporting issues and quickly drive
resolutions.
Gather both business and technical requirements to continuously enhance
dashboards and reports.
Analyze and improve performance, scalability, and long-term stability of
overall BI systems.
Contribute to the design and development of new reporting features
Required Qualifications:
Strong Linux system administrator skills
Knowledge of and experience with distributed Big Data technologies such as
Hadoop or NoSQL databases (MongoDB, Cassandra, Vertica, etc.)
Familiarity with one or more machine learning or statistics tools such as R
or MATLAB, or with equivalent libraries for other programming languages
RDBMS: Oracle/MySQL/MSSQL Server, etc.
Strong Knowledge of Agile methodologies.
Strong knowledge of and experience with DevOps and its processes
Expert understanding of ETL principles and how to apply them
Ability to write Java code, plus basic coding/scripting ability in Perl,
Ruby, C#, and/or PHP
Experienced with Linux system monitoring and analysis
Experience managing full application stacks from the OS up through custom
applications
Networking (DNS, TCP/IP)
Technology-related Bachelor's degree or equivalent work experience
Excellent oral and written communication skills
Strong multi-tasking skills
Strong analysis and troubleshooting skills and experience
Self-starter who is excited about learning new technology
Experience with DevOps tools like Chef or Puppet is a plus.
Experience with BI tools like Tableau, QlikView, MicroStrategy, Birst,
Hyperion, etc. is a plus
Experience with data integration tools like Talend, Pentaho, Syncsort or
others is a plus