The phone really is a Swiss army knife: it is a phone, a camera, an mp3 player -- and a million other things when you consider all the apps you can download (even the humble flashlight app can come in very useful). The camera alone is used for many things: snaps, video, virtual reality, car-safety apps. MyRuns1 uses the camera in a simple manner -- just to take a photo for the user profile. An app can easily fire the camera app to take a photo and then manipulate the image that is returned, for example by cropping a thumbnail version of the picture. In this lecture, we discuss how to use the camera and crop apps.
Just as importantly, we need to save app data, so this lecture also includes pointers on that.
The demo code used in this lecture includes:
Many applications need to take a photo. We will discuss how an app can launch the camera, let the user take a photo, and then regain control once the camera app returns. You will do this for the MyRuns Lab 1.
The app works as follows. The first three images show: 1) the app after it is first launched; 2) when SnapNow is selected; 3) when the camera app is launched.
The next set of screen dumps shows the remainder of the app workflow: 4) after the photo is taken; 5) when the crop app is launched; 6) after the image has been cropped and inserted, replacing the existing photo.
The program allows the user to replace the existing picture if one exists. Note that the program has a better design in terms of the helper functions saveSnap() and loadSnap(). These helper functions load the photo and render it on the UI if it already exists, and save the photo if one is taken so it is rendered in the UI the next time the app runs.
The code also has a clean way for other activities to return control to the calling activity: onActivityResult() is the callback invoked when the camera app returns after taking a photo and when the crop app returns after cropping. This is a clean design.
Similarly, we handle dialogs more cleanly. Because there are many dialogs in the MyRuns application, we start to build up a common way to handle them in MyRunsDialogFragment, which can be extended to handle different types of dialogs. Again, this is a clean design where all dialogs are managed in a centralized manner.
Android requires you to do a number of things programmatically to access sensitive resources such as the camera, GPS, and contacts. Apps need to explicitly call out permissions in the manifest and also ask the user for permission at run time. Note that in the manifest we have to request permission to use the camera app -- see the snippet below. The user has to grant permission for certain resources such as the camera and writing to storage, so you need to update the manifest, then request permissions and check the user's response at runtime. Example code is included in the demo app and in the snippets below. See the course book section on Requesting Permission on page 580 and the Android developer guide Request App Permissions.
<uses-feature android:name="android.hardware.camera" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
Next, we need to ask the user for permission to use the camera.
private void checkPermissions() {
if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.CAMERA}, REQUEST_CODE_PHOTO_PERMISSION);
}
}
After the user responds to the request for permission, the system calls onRequestPermissionsResult() as shown below. Your app has to override that method to find out whether the permission was granted. The callback is passed the same request code you passed to requestPermissions(). For example, if an app requests CAMERA access it might have the following callback method:
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults);
if (requestCode == REQUEST_CODE_PHOTO_PERMISSION) {
if (permissions[0].equalsIgnoreCase(Manifest.permission.CAMERA)
&& grantResults[0] == PackageManager.PERMISSION_GRANTED) {
// user gave permission to use the camera, so do something like take a photo
startActivityForResult(takePictureIntent, REQUEST_CODE_TAKE_PHOTO_FROM_CAMERA);
} else {
// user did not give permission to use the camera
}
}
}
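Because permissions and grantResults are parallel arrays (entry i of grantResults is the user's answer for permissions[i]), a callback that requests several permissions at once often wants to check that all of them were granted. A minimal plain-Java sketch of that check follows; the class and method names are ours, but the constant value 0 really is PackageManager.PERMISSION_GRANTED:

```java
public class PermissionCheck {
    // Same value as android.content.pm.PackageManager.PERMISSION_GRANTED
    static final int PERMISSION_GRANTED = 0;

    // grantResults is parallel to the permissions array passed to
    // requestPermissions(); an empty array means the request was cancelled.
    static boolean allGranted(int[] grantResults) {
        if (grantResults.length == 0) return false;
        for (int r : grantResults) {
            if (r != PERMISSION_GRANTED) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(allGranted(new int[]{0, 0}));  // true
        System.out.println(allGranted(new int[]{0, -1})); // false
    }
}
```

A helper like this keeps the onRequestPermissionsResult() callback short when more permissions are added later.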
The onCreate() code sets up the view, asks for user permission, gets a reference to the current image and retrieves the current image capture uniform resource identifier (URI) if it was saved in onSaveInstanceState() when the activity went into the background. A Uri is a string of characters used to identify a resource, such as a file or a web address.
public class CameraControlActivity extends FragmentActivity {
public static final int REQUEST_CODE_TAKE_FROM_CAMERA = 0;
private static final String URI_INSTANCE_STATE_KEY = "saved_uri";
private Uri mImageCaptureUri;
private ImageView mImageView;
private boolean isTakenFromCamera;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.profile);
checkPermissions();
mImageView = (ImageView) findViewById(R.id.imageProfile);
if (savedInstanceState != null) {
mImageCaptureUri = savedInstanceState
.getParcelable(URI_INSTANCE_STATE_KEY);
}
loadSnap();
}
@Override
protected void onSaveInstanceState(Bundle outState) {
super.onSaveInstanceState(outState);
// Save the image capture uri before the activity goes into background
outState.putParcelable(URI_INSTANCE_STATE_KEY, mImageCaptureUri);
}
private void checkPermissions() {
if (Build.VERSION.SDK_INT < 23) return;
if (checkSelfPermission(Manifest.permission.WRITE_EXTERNAL_STORAGE) != PackageManager.PERMISSION_GRANTED
|| checkSelfPermission(Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
requestPermissions(new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE, Manifest.permission.CAMERA}, 0);
}
}
The camera is activated via an implicit intent. Next, we discuss how to implement this in code. You are probably getting used to this by now: if you want to start another activity you need to fire an intent -- and that is exactly what we do below. We create an intent using the MediaStore.ACTION_IMAGE_CAPTURE action.
We have now used two different types of intents: implicit and explicit. We have used explicit intents numerous times to start activities, as shown below, where we set up the intent to explicitly start a particular activity by naming the target activity:
Intent intent = new Intent(MainLayoutActivity.this, LinearLayoutActivity.class);
startActivity(intent);
The second type is the implicit intent, which we use in this lecture. Implicit intents do not name a target component, only the action, as shown below. Implicit intents are often used to activate components in other applications. For example:
Intent i = new Intent(Intent.ACTION_VIEW);
i.setDataAndType(Uri.fromFile(output), "image/jpeg");
startActivity(i);
The intent specifies only the action, ACTION_VIEW. The system resolves the intent and starts an activity capable of handling the action, without us explicitly starting a browser or gallery. The Android system tries to find an app that can perform the requested action by considering the action, the data passed in the intent (e.g., a JPEG file) and the category, matching them against the intent filters that installed apps declare.
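For reference, an app that wants to receive such an implicit intent advertises a matching intent filter in its manifest. A sketch is shown below; the activity name is hypothetical, but the action, category and mimeType attributes are the standard filter elements the system matches against:

```xml
<activity android:name=".HypotheticalImageViewerActivity">
    <intent-filter>
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <data android:mimeType="image/jpeg" />
    </intent-filter>
</activity>
```

The DEFAULT category is required for an activity to receive implicit intents started with startActivity().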
One issue with implicit intents is that you have no control over which app handles them; in the case of the camera or the gallery, you fire the intent and hope for the best. This is a limitation of the approach.
The steps needed to take a photo and store it in a file are as follows:
The onPhotoPickerItemSelected() callback constructs the intent for the camera app using the ACTION_IMAGE_CAPTURE action, as in the first example project -- this is an implicit intent. Again, it does not identify the application that handles the action; it just tells the Android system: I need a snap, figure it out for me.
Because we are taking a photo and later cropping it, we set up a temporary path and name under which to save the photo while we work on it, and store that location in mImageCaptureUri, which is private to this activity (the crop step accesses it later). We construct a temporary file name based on the time, with a jpg extension. Question: can you tell me where on your phone these photos are stored? The mImageCaptureUri is passed in the intent to the camera app, as with the first camera app. Then the camera app is implicitly started using startActivityForResult(). Once the user has taken a picture and clicked the tick icon, control returns to onActivityResult() with REQUEST_CODE_TAKE_FROM_CAMERA as the request code.
public void onPhotoPickerItemSelected(int item) {
Intent intent;
switch (item) {
case MyRunsDialogFragment.ID_PHOTO_PICKER_FROM_CAMERA:
// Take photo from camera
// Construct an intent with action
// MediaStore.ACTION_IMAGE_CAPTURE
intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
// Construct temporary image path and name to save the taken
// photo
ContentValues values = new ContentValues(1);
values.put(MediaStore.Images.Media.MIME_TYPE, "image/jpeg");
mImageCaptureUri = getContentResolver().insert(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values);
intent.putExtra(android.provider.MediaStore.EXTRA_OUTPUT,
mImageCaptureUri);
intent.putExtra("return-data", true);
try {
// Start a camera capturing activity
// REQUEST_CODE_TAKE_FROM_CAMERA is an integer tag you
// defined to identify the activity in onActivityResult()
// when it returns
startActivityForResult(intent, REQUEST_CODE_TAKE_FROM_CAMERA);
} catch (ActivityNotFoundException e) {
e.printStackTrace();
}
isTakenFromCamera = true;
break;
default:
return;
}
}
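The notes above mention building the temporary file name from the current time with a jpg extension. A minimal plain-Java sketch of that naming scheme follows; the class, method and "IMG_" prefix are our own choices for illustration, not part of the demo code:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

public class TempPhotoName {
    // Build a unique temporary image name from a timestamp,
    // e.g. IMG_20240131_120000.jpg
    static String tempImageName(Date now) {
        String stamp = new SimpleDateFormat("yyyyMMdd_HHmmss", Locale.US).format(now);
        return "IMG_" + stamp + ".jpg";
    }

    public static void main(String[] args) {
        System.out.println(tempImageName(new Date()));
    }
}
```

A name derived from the time is unique enough for a scratch file that is deleted after cropping; two photos taken in the same second would collide, which is why production code often appends a random suffix as well.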
Two important callbacks are shown in the code below. When the user selects SnapNow as shown above, onChangePhotoClicked(View v) is called. This kicks off the main workflow shown in the series of images above. The displayDialog() method creates a new dialog fragment and then onCreateDialog() is called -- see the discussion at the end of the notes; in summary:
- onCreateDialog() creates a customized dialog and presents it to the user as shown in the workflow screen dumps.
- onCreateDialog() creates an OnClickListener for the dialog; once the user selects "Take picture from camera", onClick() (see below) is called.
- onClick() calls onPhotoPickerItemSelected(item) above, which kicks off taking the actual photo.
public void onSaveClicked(View v) {
// Save picture
saveSnap();
// Making a "toast" informing the user the picture is saved.
Toast.makeText(getApplicationContext(),
getString(R.string.ui_profile_toast_save_text),
Toast.LENGTH_SHORT).show();
// Close the activity
finish();
}
public void onChangePhotoClicked(View v) {
// changing the profile image, show the dialog asking the user
// to choose between taking a picture
// Go to MyRunsDialogFragment for details.
displayDialog(MyRunsDialogFragment.DIALOG_ID_PHOTO_PICKER);
}
public void displayDialog(int id) {
DialogFragment fragment = MyRunsDialogFragment.newInstance(id);
fragment.show(getSupportFragmentManager(),
getString(R.string.dialog_fragment_tag_photo_picker));
}
The next steps for taking the photo are implemented in the onActivityResult() callback. Here your app receives a callback and data from the camera intent. The file is stored in data/DCIM on your phone. Run the app and then use the Android device file explorer or the File Manager on the phone to view the file. If you use the Android device file explorer, you need to drag and drop the file to your desktop (i.e., copy it over) before looking at it. Note there are two possible execution paths in onActivityResult():
// Handle data after activity returns.
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
if (resultCode != RESULT_OK)
return;
switch (requestCode) {
case REQUEST_CODE_TAKE_FROM_CAMERA:
// Send image taken from camera for cropping
beginCrop();
break;
case Crop.REQUEST_CROP:
// Update image view after image crop
handleCrop(resultCode, data);
// Delete temporary image taken by camera after crop.
if (isTakenFromCamera) {
File f = new File(mImageCaptureUri.getPath());
if (f.exists())
f.delete();
}
break;
}
}
Two private helper functions support loading the photo from internal storage and committing any changes made to the image while the app runs. More specifically:
- If the user clicks "Save" as shown in the workflow pictures, onSaveClicked() is called and saveSnap() commits all changes to the imageProfile currently rendered in the view, saving the image to internal storage in a file called profile_photo.png.
- When the application is started or restarted, onCreate() calls loadSnap() to load the current picture from file (profile_photo.png) and render the photo to mImageView; recall mImageView = (ImageView) findViewById(R.id.imageProfile).
private void saveSnap() {
// Commit all the changes into preference file
// Save profile image into internal storage.
mImageView.buildDrawingCache();
Bitmap bmap = mImageView.getDrawingCache();
try {
FileOutputStream fos = openFileOutput(
getString(R.string.profile_photo_file_name), MODE_PRIVATE);
bmap.compress(Bitmap.CompressFormat.PNG, 100, fos);
fos.flush();
fos.close();
} catch (IOException ioe) {
ioe.printStackTrace();
}
}
private void loadSnap() {
// Load profile photo from internal storage
try {
FileInputStream fis = openFileInput(getString(R.string.profile_photo_file_name));
Bitmap bmap = BitmapFactory.decodeStream(fis);
mImageView.setImageBitmap(bmap);
fis.close();
} catch (IOException e) {
// Default profile photo if no photo saved before.
mImageView.setImageResource(R.drawable.default_profile);
}
}
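The saveSnap()/loadSnap() pair is just a compress-to-PNG round trip through a file. A plain-Java sketch of the same round trip follows, using BufferedImage and ImageIO as desktop stand-ins for Android's Bitmap and BitmapFactory; the class and method names are ours:

```java
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

public class SnapRoundTrip {
    // Write an image to a PNG file and read it back,
    // mirroring what saveSnap() and loadSnap() do with a Bitmap
    static BufferedImage roundTrip(BufferedImage img) throws IOException {
        File f = File.createTempFile("profile_photo", ".png");
        ImageIO.write(img, "png", f);         // like bmap.compress(CompressFormat.PNG, 100, fos)
        BufferedImage back = ImageIO.read(f); // like BitmapFactory.decodeStream(fis)
        f.delete();
        return back;
    }

    public static void main(String[] args) throws IOException {
        BufferedImage img = new BufferedImage(48, 48, BufferedImage.TYPE_INT_RGB);
        System.out.println(roundTrip(img).getWidth()); // 48
    }
}
```

PNG is lossless, so the decoded image has exactly the pixels that were written; the quality argument 100 in the Android version is ignored for PNG.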
If you are designing an app like MyRuns that uses dialogs throughout the code, it is better to provide a common dialog service that can be used to build, show and return user input from dialogs. MyRunsDialogFragment handles all the customized dialog boxes in our project, differentiated by the dialog id passed to the MyRunsDialogFragment constructor. This is all accomplished by the simple MyRunsDialogFragment shown below, which extends the DialogFragment class. The code is self-explanatory:
When the user taps SnapNow, onChangePhotoClicked() calls displayDialog() to display the "take picture from camera" dialog and trigger the workflow shown in the workflow diagram.
public class MyRunsDialogFragment extends DialogFragment {
// Different dialog IDs
public static final int DIALOG_ID_PHOTO_PICKER = 1;
// For photo picker selection:
public static final int ID_PHOTO_PICKER_FROM_CAMERA = 0;
private static final String DIALOG_ID_KEY = "dialog_id";
public static MyRunsDialogFragment newInstance(int dialog_id) {
MyRunsDialogFragment frag = new MyRunsDialogFragment();
Bundle args = new Bundle();
args.putInt(DIALOG_ID_KEY, dialog_id);
frag.setArguments(args);
return frag;
}
@Override
public Dialog onCreateDialog(Bundle savedInstanceState) {
int dialog_id = getArguments().getInt(DIALOG_ID_KEY);
final Activity parent = getActivity();
// Setup dialog appearance and onClick Listeners
switch (dialog_id) {
case DIALOG_ID_PHOTO_PICKER:
// Build picture picker dialog for choosing from camera or gallery
AlertDialog.Builder builder = new AlertDialog.Builder(parent);
builder.setTitle(R.string.ui_profile_photo_picker_title);
// Set up click listener, firing intents open camera
DialogInterface.OnClickListener dlistener = new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int item) {
// Item is ID_PHOTO_PICKER_FROM_CAMERA
// Call the onPhotoPickerItemSelected in the parent
// activity, i.e., CameraControlActivity in this case
((CameraControlActivity) parent)
.onPhotoPickerItemSelected(item);
}
};
// Set the item/s to display and create the dialog
builder.setItems(R.array.ui_profile_photo_picker_items, dlistener);
return builder.create();
default:
return null;
}
}
}
Android has a number of options for storing app data. Take a look at the developer notes for more details. MyRuns1 requires you to save the profile photo to a file.
The options for storage in Android can be summarized as:
We will use all of these options when building out the complete MyRuns app.
Also read saving files.
For example, you can use openFileOutput(String name, int mode) to open a private file associated with this Context's application package for writing. This is a good place to store private data associated with an app. Consider the following snippet. The file profile_photo.png is saved to /data/data/edu.dartmouthcs.myruns1/files. You can only access this file directly if the phone is rooted.
try {
if (mProfilePictureArray != null) {
Log.d(TAG, "Save photo in ByteArray to profile_photo.png");
FileOutputStream fos = openFileOutput(
getString(R.string.profile_photo_file_name),
MODE_PRIVATE);
fos.write(mProfilePictureArray);
// from the code - path of profile_photo.png
// is/data/data/edu.dartmouthcs.myruns1/files
String path = this.getFilesDir().getAbsolutePath();
Log.d(TAG, "path of profile_photo.png is " + path);
fos.flush();
fos.close();
}
} catch (Exception ioe) {
ioe.printStackTrace();
}