An administrator is deploying Spark on Amazon EMR for two distinct use cases: machine learning
algorithms and ad-hoc querying. All data will be stored in Amazon S3. A separate cluster will be
deployed for each use case. The data volumes on Amazon S3 are less than 10 GB.
How should the administrator align instance types with the cluster’s purpose?
A. Machine Learning on C instance types and ad-hoc queries on R instance types
B. Machine Learning on R instance types and ad-hoc queries on G2 instance types
C. Machine Learning on T instance types and ad-hoc queries on M instance types
D. Machine Learning on D instance types and ad-hoc queries on I instance types
Which answer is correct, and are there any pointers in the documentation that support it?