

Typical pattern recognition tasks in the field of machine learning are divided into three distinct stages: pre-processing of the original data, feature extraction, and classification. The objective of pre-processing is to remove information that is not useful for classifying an object and to normalise the data representing it, so that all the objects under study are comparable to each other.

In the next stage, a feature extraction technique is applied to reduce the dimensionality of the original vectors while keeping as much discriminatory information as possible. This can be achieved by means of different techniques, PCA, LDA and autoencoders being among the best known. Finally, a classification process is applied, such as the technique known as the “direct voting scheme”.

Under this scheme, all the feature vectors extracted from a “test” object are classified by means of the k-Nearest-Neighbour (k-NN) rule. The winning class of each classified vector increases the overall vote count for that class, so the class that accumulates the most votes once all vectors belonging to the same “test” object have voted is the one returned as the classification result for that object.
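As an illustration, the voting scheme described above can be sketched in a few lines of Python. The data and function names here are hypothetical, purely for illustration; a real system would operate on vectors produced by the feature extraction stage.

```python
from collections import Counter
from math import dist

def knn_label(vector, train_set, k=3):
    """Label one feature vector with the majority class among its k nearest training vectors."""
    neighbours = sorted(train_set, key=lambda item: dist(vector, item[0]))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

def classify_by_voting(test_vectors, train_set, k=3):
    """Direct voting scheme: each vector extracted from the test object casts one vote;
    the class with the most votes is returned as the object's class."""
    votes = Counter(knn_label(v, train_set, k) for v in test_vectors)
    return votes.most_common(1)[0][0]

# Toy training set: (feature vector, class label)
train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"), ((1.0, 1.0), "B"), ((0.9, 1.1), "B")]
# Three feature vectors extracted from the same "test" object
test_obj = [(0.05, 0.1), (0.2, 0.1), (0.95, 1.0)]
print(classify_by_voting(test_obj, train))
```

Even though one of the three vectors votes for class “B”, the object as a whole is assigned class “A”, which is the point of aggregating votes per object rather than trusting any single vector.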

It is important not to lose sight of the fact that Machine Learning is a field within Artificial Intelligence, but with influences and contributions from other areas such as Theoretical Computer Science, Decision Theory and Statistics. Tasks that require simulating behaviour that would be regarded as “intelligent” in a person generally call for techniques that learn from real-world data. This information is not usually structured as in other areas of Computer Science, but is presented as examples: physical measurements, variables obtained from different data sources, interactions with users, etc. The final goal is to build an inference model.

The construction of a model of an observable process can be undertaken by two quite different procedures, depending on our interests. On the one hand, if it is necessary (and possible) to understand the internal functioning of the phenomenon, it will be convenient to build a Mechanistic Model of it. On the other hand, if it is not reasonably possible to understand the internal mechanisms of the process to be modelled, we must resort to an Empirical Model which, from a more or less extensive set of observations, gives us the ability to predict the development of the process over a certain range of conditions that, in general, need not have been observed while building the model.

It is important not to confuse the concept of a mechanistic statistical model with that of a deductive or knowledge-based model. While the latter is completely specified by the person who designs it, who must be able, at least in principle, to solve the problem manually or with the advice of an expert, in a mechanistic model only the parametric behaviour of the phenomenon is deductively defined, and the values of the parameters must be estimated from a set of observations. Mechanistic models are typically formulated in terms of equations that are non-linear in the parameters. Their advantages over empirical models are that: a) they contribute more effectively to the development of scientific knowledge; b) they are much more reliable in extrapolation; and c) they are much more effective in terms of representational economy.



Typical problems in Machine Learning do not generally belong to the set of phenomena that can be modelled by a mechanistic, much less a knowledge-based, approach. They are usually very complex processes that require fundamentally empirical (inductive) techniques.

Among the inductive models of Machine Learning, we can distinguish the following categories, depending on the type of information supplied to the system at training time:

  • Supervised learning: The algorithm produces a function that establishes a correspondence between the inputs and the desired outputs of the system. An example is classification tasks, in which the system has to label (classify) a series of vectors with one of several categories (classes) based on previously labelled examples.
  • Unsupervised learning: The entire modelling process is carried out on a set of examples consisting of system inputs only; there is no information about the categories of these examples. The system therefore has to be able to recognise patterns in order to label new inputs.
  • Semi-supervised learning: This type of algorithm combines the two previous approaches in order to classify properly, taking both labelled and unlabelled data into account.
  • Reinforcement learning: The algorithm learns from the responses to its actions; that is, the system learns by trial and error.
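The contrast between the first two settings can be illustrated with a toy sketch (all data and names here are hypothetical): a nearest-centroid classifier built from labelled examples, versus a simple 2-means clustering of the same values without their labels.

```python
from math import dist
from statistics import mean

# --- Supervised: labelled examples -> nearest-centroid classifier ---
labelled = [((1.0,), "low"), ((1.2,), "low"), ((8.0,), "high"), ((8.5,), "high")]
groups = {}
for x, y in labelled:
    groups.setdefault(y, []).append(x)
# One centroid per class, computed from the labelled examples
centroids = {y: tuple(map(mean, zip(*xs))) for y, xs in groups.items()}

def predict(x):
    """Assign the class whose centroid is closest to x."""
    return min(centroids, key=lambda y: dist(x, centroids[y]))

# --- Unsupervised: the same values without labels -> 2-means clustering ---
points = [1.0, 1.2, 8.0, 8.5]
c0, c1 = points[0], points[-1]           # initial centroid guesses
for _ in range(5):                        # Lloyd iterations
    g0 = [p for p in points if abs(p - c0) <= abs(p - c1)]
    g1 = [p for p in points if abs(p - c0) > abs(p - c1)]
    c0, c1 = mean(g0), mean(g1)

print(predict((1.1,)), sorted(g0), sorted(g1))
```

The supervised model can name its output classes because the labels were given; the clustering recovers the same two groups but can only number them, which is why new inputs must be tagged against the discovered patterns rather than against known categories.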

The application of the techniques described above to Artificial Intelligence problems requires iterative approaches that refine the models built at each stage through progressive analysis of the partial results, feeding this information back to improve the preliminary models.

Technological advances in the area of mass data computing have made it possible to start exploiting, and applying to real tasks, the techniques and solutions that have been researched for several decades in the fields of pattern recognition, machine vision, machine learning, natural language processing and related areas.

Big Data and cloud computing technologies support the development of automatic processes on which many of the techniques evolved within the field of Artificial Intelligence are based.


In addition to speech recognition and machine translation, a multitude of tasks of diverse origin are now being addressed with an AI-based approach. For example:

  • Bank fraud detection.
  • Biometric recognition (fingerprint, iris, face, etc.).
  • Complex image analysis for multiple-object recognition.
  • Data mining.
  • Many others.

In the near future, the increase in the amount of data we generate daily, and the need to process such data to obtain useful information or to carry out specific actions, will continue to be the source of new challenges in the application of the various techniques researched and developed in the field of AI.



The application of AI techniques, more specifically Machine Learning (ML), together with existing computing capacity, to the health field makes it possible to build support systems for doctors that facilitate their work and improve the quality of the services provided, resulting in a higher quality of life for patients. The aim is thus to apply techniques that have proven their validity and effectiveness in other areas, such as biometrics, handwritten text recognition and automatic translation.

Health data analysis typically involves few observations (tens, hundreds) compared with a large number of variables (hundreds, thousands or more). This situation is a real challenge, for the following reasons:

  • Inferring decision boundaries is very hard in high-dimensional spaces, since the data tend to be very sparse and many observations are required to draw reliable conclusions.
  • The problem under study may be incompletely specified: it is not known to what extent patients have the disease because of the variables included in the analysis or for some other reason.
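The first difficulty can be demonstrated with a small experiment (a sketch, not a formal result): as the dimensionality grows, the nearest and farthest of a set of random points end up at almost the same distance, so distance-based decision boundaries become unreliable unless many more observations are available.

```python
import random
from math import dist

random.seed(0)

def contrast(dim, n=200):
    """Ratio of the farthest to the nearest distance from the origin
    for n points drawn uniformly from the unit hypercube."""
    pts = [[random.random() for _ in range(dim)] for _ in range(n)]
    d = sorted(dist(p, [0.0] * dim) for p in pts)
    return d[-1] / d[0]

# With the same number of points, the contrast between near and far
# collapses towards 1 as the dimension grows.
for dim in (2, 100, 1000):
    print(dim, round(contrast(dim), 2))
```

In two dimensions the farthest point is many times farther than the nearest one; with a thousand variables the ratio is close to 1, which is the “distance concentration” behind the sparsity problem described above.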



    Technology aimed at the classification of parts, detection of manufacturing faults and metrology for quality control in in-line manufacturing processes. Research in the field of machine learning has been key to developing the basis of this technology: for example, classification algorithms based on the nearest neighbour (a supervised geometric classifier), machine learning of a reference model by composing several acquisitions of the same correct real part, and the process of aligning the reference part with the part under inspection, based on techniques that minimise point-to-point distances through an iterative algorithm.
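The alignment step mentioned above can be sketched in a deliberately simplified, translation-only form (a full implementation would also estimate rotation; the data here are hypothetical): at each iteration, every point of the inspected part is matched to its closest reference point, and the estimated offset is updated with the mean residual.

```python
from math import dist
from statistics import mean

# Reference model of the correct part (a unit square of control points)
reference = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
# The part under inspection: the same shape displaced by an unknown offset
inspected = [(x + 0.7, y - 0.3) for x, y in reference]

shift = (0.0, 0.0)
for _ in range(10):
    # Apply the current estimate, then match each point to its nearest reference point
    moved = [(x - shift[0], y - shift[1]) for x, y in inspected]
    matches = [min(reference, key=lambda r: dist(p, r)) for p in moved]
    # Update the offset with the mean point-to-point residual
    shift = (shift[0] + mean(p[0] - r[0] for p, r in zip(moved, matches)),
             shift[1] + mean(p[1] - r[1] for p, r in zip(moved, matches)))

print(round(shift[0], 3), round(shift[1], 3))
```

The loop converges to the true displacement of (0.7, -0.3) even though the first round of closest-point matches is partly wrong, which is exactly what makes the iterative formulation necessary.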


    These are tasks that require research into machine learning techniques, computer vision and their combination, to be applied in many very diverse fields (transport, security, biometrics, finance, optimisation, etc.), where combining them yields promising results in the automation and improvement of processes. We must also highlight the use of pattern recognition techniques, in both machine learning and computer vision, for object detection, environment recognition and scene analysis in support of decision making.


    Get in touch with us through the form for companies and we will guide you in incorporating these technologies into your project through partners specialised in your sector.
