Computer recognition of American Sign Language (ASL) is a computationally intensive task. This research investigates the transcription of static ASL signs on a consumer-level mobile device. The application provides real-time sign-to-text translation by processing a live video stream to detect the ASL alphabet, as well as custom signs that trigger tasks on the device. The chosen classification algorithm uses Locality Preserving Projections (LPP) for manifold learning, followed by multi-class Support Vector Machine (SVM) classification. The algorithm is evaluated both with and without cloud assistance. Compared to the purely local mobile application, the cloud-assisted application increased classification speed, reduced memory usage, and kept network usage low while only marginally increasing power consumption.
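The LPP-plus-SVM pipeline named above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a binary k-nearest-neighbor adjacency graph for LPP (one of the standard weighting schemes) and uses scikit-learn's `digits` dataset as a stand-in for hand-sign features, since the paper's actual features, neighborhood weights, and SVM kernel are not specified in the abstract.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.svm import SVC
from sklearn.datasets import load_digits

def lpp(X, n_components=20, k=5):
    """Locality Preserving Projections with a binary kNN graph.

    Solves the generalized eigenproblem  X^T L X a = lambda X^T D X a
    and returns the eigenvectors with the smallest eigenvalues, which
    best preserve local neighborhood structure after projection.
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances (clipped for float error).
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    # Binary kNN adjacency (column 0 of argsort is the point itself).
    W = np.zeros((n, n))
    nbrs = np.argsort(d2, axis=1)[:, 1:k + 1]
    for i in range(n):
        W[i, nbrs[i]] = 1.0
    W = np.maximum(W, W.T)          # symmetrize the graph
    D = np.diag(W.sum(axis=1))      # degree matrix
    L = D - W                       # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])  # regularize for stability
    _, vecs = eigh(A, B)            # eigenvalues in ascending order
    return vecs[:, :n_components]   # projection matrix, shape (d, n_components)

# Stand-in data: 8x8 digit images in place of hand-sign feature vectors.
X, y = load_digits(return_X_y=True)
X = X / 16.0                        # scale pixel values to [0, 1]

P = lpp(X, n_components=20)         # learn the low-dimensional manifold
Z = X @ P                           # project features before classifying
clf = SVC(kernel="rbf").fit(Z, y)   # multi-class SVM (one-vs-one internally)
print(f"training accuracy: {clf.score(Z, y):.2f}")
```

In a cloud-assisted variant of this design, the inexpensive projection `X @ P` could run on the device while the SVM evaluation is offloaded, which matches the speed/memory trade-off the abstract reports; the exact split used in the paper is not stated here.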