Introduce the GCP to an audience of professionals from IT services/systems integration companies - a very different target audience from the professionals at product technology companies who are already well served by Google’s existing GCP courses.
Resolve the chicken-and-egg problem facing employees at IT service companies: they will not gain hands-on GCP experience unless they are assigned to client projects on the GCP, and they will not be assigned to such client projects unless they master the GCP in the first place.
Relate the GCP to concepts and technologies the audience already understands: the typical audience member will likely possess great breadth (though not depth) of knowledge about competing cloud offerings such as AWS or Azure, about Hadoop ecosystem technologies such as Hadoop, Hive, HBase, Spark and Pig, and about popular proprietary offerings such as Teradata and Oracle. This course aims to clearly articulate how the GCP is a superior alternative to these - and in doing so, it sticks closely to technologies and terms the audience is already familiar with.
No labs - focus on concepts, introduced from first principles: professionals in IT services/systems integration firms need more help with the ‘why’, especially since they often did not major in Computer Science or Computer Engineering. They are relatively comfortable with the ‘how’ - specific actions and how to perform them. Being largely self-taught, they may need quick refreshers on CS or CE concepts, and may have gaps in their knowledge that the course can easily fill.
Less is more: Don’t overwhelm the audience with more information than they can process. Go deep on what really matters, but on other topics leave them with only what they really need to remember later.
Adjust mechanics to fit the Indian context: account for the realities of traffic, commuting and organisational expectations in India (e.g. it is hard for attendees to completely shut off from day-to-day work).
Audience
Please refer to the objectives above.
Prerequisites
Ideally, participants should be employed in IT services companies, and not in product development companies.
Participants should be employed in IT, ITES or Systems Integration companies, in roles such as:
Infra Administration
Hadoop Administration
Big Data Delivery
Pre-sales
Solution Architecture teams
Machine Learning CoEs (Centers of Excellence)
Master Data Management and Migration
Content
Participants can span a wide range of experience levels and job functions:
Decision-makers - technical and non-technical - responsible for evaluating cloud solutions
Support professionals who are responsible for servicing user request tickets on Hadoop ecosystem tools
Big Data Architects who must design and build scalable, resilient and cost-effective apps on the cloud
Data Scientists who understand ML, particularly Spark MLlib, but not TensorFlow or deep learning
Big Data Developers looking mostly to redesign existing apps for the Google Cloud (rather than to build native apps on the cloud)
Business Analysts and Pre-sales professionals who must map system and user requirements to specific cloud offerings
Optional - Previous experience with the following technologies would help, but is not required
Some experience with public cloud products such as AWS or Azure
Some experience with Spark, Hadoop, Hive, Teradata or Oracle
What’s definitely not required
No prior GCP experience required
No prior Python or Java programming experience required