9 Machine Learning Projects To Automate Machine Learning


Machine learning projects have been well received, either because they pioneered a specific niche or because they offer a broad range of services to their users. With so many projects available, what works best for you depends on your machine learning goals, and also on the ecosystem you work in. The projects you are considering may differ in scope, but the big ones share one trait: they serve a very large number of users. Alongside them, several smaller projects have become popular by offering flexible, niche services to a more focused audience.

Machine learning is quite expensive. Even if you have everything you need, such as the toolkit, skills, hardware and data, there is still a long process involved in building and fine-tuning a machine learning model. If the aim of machine learning is to automate tasks previously done by humans, it makes sense to use machine learning to remove the drudgery from machine learning itself.

Since automated machine learning is still in its early stages, it should not be treated as a collection of scattered, disconnected technologies. The real value comes when it is packaged and made available to business users.

Below we have presented 9 machine learning projects to automate machine learning. This does not mean you have to try all of them. It is advisable to first check what each project offers, how it is implemented, and how it fits into your ecosystem.

Auto-Sklearn

Auto-Sklearn was developed at the University of Freiburg by a research group working on machine learning for automated algorithm design. Auto-Sklearn is automated machine learning for Scikit-Learn projects: it frees the user from algorithm selection and hyperparameter tuning. It comes with a number of estimators, that is, methods for learning from the given data, and helps select the right estimator for the task at hand. On top of this, Auto-Sklearn brings the benefits of meta-learning, ensemble construction and Bayesian optimization.

Some manual intervention is still involved: the user can set limits on the amount of time and memory used for the tuning process. But these choices are easy to make, and the rest can be left for the machine to decide rather than endlessly messing around with hyperparameters and model selection.
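To give a feel for how little the user has to do, here is a minimal sketch using the autosklearn package. The parameter names follow the project's documentation, though they have shifted slightly between releases, so treat the exact keywords as an assumption to check against your installed version.

```python
# Minimal sketch of Auto-Sklearn on a toy dataset.
# Assumes the autosklearn package is installed (pip install auto-sklearn).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import autosklearn.classification

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# The user only sets budgets; algorithm selection and hyperparameter
# tuning are handled by Auto-Sklearn itself.
automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=300,   # total optimization budget, in seconds
    per_run_time_limit=30,         # time budget per candidate model
    memory_limit=3072,             # memory budget per job, in MB
)
automl.fit(X_train, y_train)
print(accuracy_score(y_test, automl.predict(X_test)))
```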

Auto-Weka

Auto-Weka is a similar machine learning project to Auto-Sklearn, aimed at machine learners who work in Java with the Weka machine learning package. In fact, Auto-Sklearn was inspired by the success of Auto-Weka. Auto-Weka takes an automatic approach to selecting a learning algorithm and its hyperparameters simultaneously, going beyond previous methods that addressed these problems in isolation. The goal is to help non-expert users improve their results by automatically identifying suitable machine learning algorithms and hyperparameter settings.

H2O Driverless AI

H2O Driverless AI is another tool that makes machine learning easy for beginners and non-experts. It has been designed for users who know products like Tableau and simply want answers from their data without learning the nitty-gritty of machine learning algorithms. The tool assists employees with no technical background by preparing the data, tuning parameters and determining the best algorithms for a particular business problem.

Driverless AI presents a web-based UI in which the user selects one or more variables in the dataset and the system produces the answer, explaining the results with annotations and interactive charts. Like Prodigy, Driverless AI is a proprietary product rather than an open source project. It goes a step further by automating the difficult decisions involved in preparing a model.

Prodigy

Machine learning systems are built from code and data. The code can be reused, but the data is much harder to reuse, so it has to be annotated for each new problem. Prodigy is an annotation tool for supervised machine learning, where metadata describing the data is created by hand, which makes annotation labour-intensive. That process cannot be completely automated, but machine learning can certainly speed it up.

Prodigy uses a web interface to make the annotation and training process fast for models that require annotated datasets. Annotations added to a dataset help guide future annotations, which accelerates the whole process. Prodigy is driven from Python: Python modules are used to train models, inspect annotated datasets, and manage and test the results.
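To make the idea of hand-created metadata concrete, here is a small sketch of the kind of record a Prodigy-style annotation workflow produces and consumes. The field names are a simplified assumption for illustration rather than Prodigy's full task schema.

```python
# Sketch of an annotated record in a Prodigy-style NER workflow:
# plain JSONL, one task per line, with labelled spans and the annotator's
# decision. The schema here is simplified for illustration.
import json

annotation = {
    "text": "Sebastian Raschka released a new version of MLxtend.",
    "spans": [
        {"start": 0, "end": 17, "label": "PERSON"},
        {"start": 44, "end": 51, "label": "PRODUCT"},
    ],
    "answer": "accept",   # the annotator accepted the suggested labels
}

with open("annotations.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(annotation) + "\n")

# Downstream training code can stream the records back in the same way.
with open("annotations.jsonl", encoding="utf-8") as f:
    tasks = [json.loads(line) for line in f]
print(len(tasks), "annotated task(s) loaded")
```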

Annotation and training are thus addressed efficiently by Prodigy. It allows data scientists to do their own annotation, so ideas can be tested faster, and it is expected to reduce costs for large projects.

Google’s AutoML

Google has recently come up with two projects, 'AutoML' being one of them. AutoML is designed to automate the design of multi-layered deep learning models. Designing such a network is time consuming even for people with deep machine learning expertise. So instead of having humans work through countless candidate network designs, the task is left to AutoML, which uses a reinforcement learning algorithm to explore the enormous space of possible networks. With feedback from the networks it tries, the training mechanism can decide on the model that will be the most productive and yield the best results.
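The loop described above can be sketched in a few lines. The toy below uses plain random search with mutation instead of the reinforcement learning controller Google describes, and the search space and scoring function are invented for illustration, but the shape of the feedback loop is the same: propose a configuration, score it, and let the best results steer the next proposals.

```python
# Toy illustration of an architecture-search loop: a controller proposes
# candidate network configurations, each candidate is scored, and the
# best-scoring ones bias the next round of proposals. A real system would
# train and validate an actual network for every candidate.
import random

SEARCH_SPACE = {
    "num_layers": [2, 4, 8, 16],
    "units_per_layer": [32, 64, 128, 256],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def propose(best=None):
    """Sample a candidate; sometimes mutate the current best instead."""
    if best and random.random() < 0.5:
        candidate = dict(best)
        key = random.choice(list(SEARCH_SPACE))
        candidate[key] = random.choice(SEARCH_SPACE[key])
        return candidate
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(candidate):
    """Stand-in for training the candidate network and measuring accuracy."""
    return random.random()  # replace with a real train/validate cycle

best, best_score = None, float("-inf")
for step in range(50):
    candidate = propose(best)
    score = evaluate(candidate)
    if score > best_score:
        best, best_score = candidate, score
print("best configuration found:", best, "score:", round(best_score, 3))
```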

Google’s Vizier

Google named its second project 'Google Vizier', described in a recently published paper on black-box optimization. It finds the best operating parameters for cases where it is difficult to relate the parameters you feed in to the results you get out. Vizier is used to improve Google's services by tweaking their behaviour, and to optimize machine learning models; it also provides the core capability behind 'HyperTune', the tuning subsystem of Google's cloud machine learning platform. This system automatically tunes algorithms across Alphabet, Google's parent company.
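The black-box setting can be illustrated with a minimal suggest-and-report loop. The Study class below is a made-up stand-in, not Google Vizier's actual API, and it uses random search where Vizier applies far more sophisticated optimization, but it shows the protocol: the service proposes parameters, the caller measures the system and reports the result back.

```python
# Minimal sketch of a black-box tuning loop. The Study class and its
# methods are invented for illustration; they are not Vizier's API.
import random

class Study:
    def __init__(self, space):
        self.space = space
        self.trials = []          # (params, objective) pairs observed so far

    def suggest(self):
        # Real services use Bayesian optimization; random search keeps the
        # sketch simple while preserving the suggest/report contract.
        return {name: random.uniform(lo, hi) for name, (lo, hi) in self.space.items()}

    def report(self, params, objective):
        self.trials.append((params, objective))

    def best(self):
        return max(self.trials, key=lambda t: t[1])

# Tune a "service" whose behaviour we can only measure, not model directly.
def measure_service(params):
    return -(params["cache_size"] - 3.7) ** 2 - (params["timeout"] - 1.2) ** 2

study = Study({"cache_size": (0.0, 10.0), "timeout": (0.1, 5.0)})
for _ in range(200):
    params = study.suggest()
    study.report(params, measure_service(params))
print("best parameters found:", study.best())
```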

Although Vizier is presently only for Google's internal use, it is expected that Google will eventually offer a version of this service as a product or release it as an open source project.

MLxtend

MLxtend is Sebastian Raschka’s collection of tools for machine learning. According to Raschka, it is a set of tools and reference implementations for data science and machine learning, built both to put useful algorithms in people's hands and for teaching purposes. MLxtend covers commonly used machine learning features with code that is simple and clean, and its API closely mirrors Scikit-learn's.
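As a brief example of that Scikit-learn-flavoured API, the sketch below uses MLxtend's StackingClassifier to combine a few base estimators under a meta-learner; it assumes the mlxtend package is installed (pip install mlxtend).

```python
# Stacking several base estimators with MLxtend's scikit-learn-style API.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from mlxtend.classifier import StackingClassifier

X, y = load_iris(return_X_y=True)

# Base learners plus a meta-learner, wired together with one class.
stack = StackingClassifier(
    classifiers=[KNeighborsClassifier(3), DecisionTreeClassifier(max_depth=4)],
    meta_classifier=LogisticRegression(max_iter=1000),
)
print(cross_val_score(stack, X, y, cv=5).mean())
```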

DataCleaner

Randal Olson, the mind behind datacleaner, describes it as a Python tool that automatically cleans data sets and makes them ready for analysis. DataCleaner saves time encoding and cleaning your data once it is in a format that pandas DataFrames can handle. (Note that this Python package is distinct from the DataCleaner data profiling engine, a separate tool used to discover and analyze data quality, with features such as support for HDFS data stores and EBCDIC files.)
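A minimal sketch of the Python package in action is shown below; it assumes datacleaner is installed (pip install datacleaner) and uses its autoclean entry point, which imputes missing values and encodes non-numeric columns so the DataFrame is ready for modelling.

```python
# Cleaning a small pandas DataFrame with datacleaner's autoclean.
import pandas as pd
from datacleaner import autoclean

df = pd.DataFrame({
    "age": [34, None, 29, 41],          # missing numeric value to impute
    "city": ["Berlin", "Athens", None, "Athens"],  # missing category to fill and encode
    "clicked": [1, 0, 0, 1],
})

clean_df = autoclean(df)
print(clean_df)
```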

Deepy

Deepy, though not very popular, is an extensible deep learning framework based on Theano. Its guiding principle is simplicity, with straightforward documentation and examples. It offers a clean, high-level interface to components such as LSTMs, auto-encoders and batch normalization. Deepy is a high-level deep learning API, and a sister project uses it to build generative models such as the Deep Recurrent Attentive Writer (DRAW).

Have you benefited from these nine machine learning projects to automate machine learning? Do give your comments in the comment section. We would love to hear your feedback!

 
