Core ML opens up impressive opportunities: in the hands of talented developers, it enables apps with integrated machine learning across all Apple devices. But, much like ARKit, the framework still needs plenty of refinement before it can truly boost the user experience. The most important limitation is that Core ML is not open source (while most ML tools are), so developers cannot freely modify it or adapt its internals to their apps. Core ML also does not support on-device training or retraining of models, so models must be trained externally and then converted and bundled into the app manually. This may cause inconvenience for companies down the road; for now, we will see which features Apple adds and which it ignores.
Core ML supports the following model types: tree ensembles, neural networks, support vector machines, and linear/logistic regression. Core ML has the potential to make iOS developers treat machine learning and deep learning as core features, or even the foundation, of future apps. Apple has announced that Core ML will become a machine learning tool available to everybody, which probably signals the direction of the company's technologies to come.
Models to work with
The Core ML model is the central part of the Core ML framework. Apple provides five complete, pre-trained Core ML models to encourage app developers to build with the framework: 'Places205-GoogLeNet', 'Inception V3', 'ResNet50', 'SqueezeNet', and 'VGG16'. Because Core ML runs on the device rather than on cloud servers, the memory footprints of these models are kept small. Beyond the bundled models, models trained with other ML tools such as Caffe, Keras, libSVM, or XGBoost can be converted to the Core ML format.
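To illustrate how one of these bundled models is used in practice, here is a minimal sketch of image classification with the SqueezeNet model via the Vision framework. It assumes the `SqueezeNet.mlmodel` file has been added to an Xcode project, which auto-generates the `SqueezeNet` class referenced below; the function name `classify` is our own.

```swift
import CoreML
import Vision

// Assumes 'SqueezeNet.mlmodel' is in the Xcode project, which
// auto-generates the `SqueezeNet` class used here.
func classify(image: CGImage) {
    guard let model = try? VNCoreMLModel(for: SqueezeNet().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        // Vision returns classification results sorted by confidence.
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("Label: \(top.identifier), confidence: \(top.confidence)")
    }

    // Run the request on the given image; errors are ignored in this sketch.
    try? VNImageRequestHandler(cgImage: image).perform([request])
}
```

The same pattern works for any of the five bundled models: only the generated class name changes.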
Core ML automatically determines whether a model should run on the device's GPU or CPU. And because everything runs on the device, network connection quality does not affect the performance or output quality of ML-based applications.
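This automatic choice happens by default, but on newer OS versions the `MLModelConfiguration` API (introduced after the initial Core ML release) lets an app constrain which compute units are used. A brief sketch, with a hypothetical generated model class:

```swift
import CoreML

// By default Core ML decides between CPU and GPU on its own.
// MLModelConfiguration can restrict that choice, e.g. CPU-only
// for low-priority background work.
let config = MLModelConfiguration()
config.computeUnits = .cpuOnly   // or .all to let Core ML decide

// Hypothetical generated model class from an .mlmodel file:
// let model = try SqueezeNet(configuration: config)
```

For the original Core ML release there is no such override; the framework's automatic selection is the only behavior.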