Computing modules (Вычислительные модули) are plug-in specialized computers designed to solve narrowly focused tasks, such as accelerating artificial neural network algorithms, computer vision, voice recognition, machine learning, and other artificial intelligence methods. They are built around a neural processor, a specialized class of microprocessors and coprocessors (processor, memory, data transfer).
Computing system (Вычислительная система) is a software and hardware complex intended for solving problems and processing data (including calculations), or several interconnected complexes that form a single infrastructure.
Computing units (Вычислительные блоки) are blocks that work like filters, transforming packets according to certain rules. The instruction set of such a unit can be limited, which guarantees a simple internal structure and a sufficiently high speed of operation.
Concept drift (Дрейф концепций) In predictive analytics and machine learning, concept drift means that the statistical properties of the target variable, which the model is trying to predict, change over time in unforeseen ways. This causes problems because predictions become less accurate as time passes.
Confidentiality of information (Конфиденциальность информации) a mandatory requirement for a person who has access to certain information not to transfer such information to third parties without the consent of its owner.
Confirmation Bias (Предвзятость подтверждения) the tendency to search for, interpret, favor, and recall information in a way that confirms one's own beliefs or hypotheses while giving disproportionately less attention to information that contradicts it.
Confusion matrix (Матрица неточностей) is a situational analysis table that summarizes the prediction results of a classification model in machine learning. The records in the dataset are summarized in a matrix according to their real category and the category predicted by the classification model.
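A minimal illustration of how such a matrix is built, assuming scikit-learn is available; the labels and predictions below are made-up toy data.

```python
from sklearn.metrics import confusion_matrix

# True categories vs. the classification model's predictions (toy data).
y_true = ["cat", "cat", "dog", "dog", "dog", "cat"]
y_pred = ["cat", "dog", "dog", "dog", "cat", "cat"]

# Rows correspond to the real category, columns to the predicted category.
cm = confusion_matrix(y_true, y_pred, labels=["cat", "dog"])
print(cm)
# [[2 1]
#  [1 2]]
```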
Connectionism (Коннекционизм) An approach in the field of cognitive science that hopes to explain mental phenomena using artificial neural networks.
Consistent heuristic (Последовательная (непротиворечивая) эвристика) In the study of path-finding problems in artificial intelligence, a heuristic function is said to be consistent, or monotone, if its estimate is always less than or equal to the estimated distance from any neighboring vertex to the goal, plus the cost of reaching that neighbor.
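In symbols (a standard formulation consistent with the description above, where n' is any neighbor of n, c(n, n') is the cost of moving from n to n', and G is a goal node):

```latex
h(n) \le c(n, n') + h(n'), \qquad h(G) = 0.
```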
Constrained conditional model (CCM) (Условная модель с ограничениями) A machine learning and inference framework that augments the learning of conditional (probabilistic or discriminative) models with declarative constraints.
Constraint logic programming (Логическое программирование ограничений) A form of constraint programming, in which logic programming is extended to include concepts from constraint satisfaction. A constraint logic program is a logic program that contains constraints in the body of clauses [121].
Constraint programming (Ограниченное программирование) A programming paradigm wherein relations between variables are stated in the form of constraints. Constraints differ from the common primitives of imperative programming languages in that they do not specify a step or sequence of steps to execute, but rather the properties of a solution to be found [122].
Constructed language (Also conlang) (Искусственные языки) A language whose phonology, grammar, and vocabulary are consciously devised, instead of having developed naturally. Constructed languages may also be referred to as artificial, planned, or invented languages.
Consumer artificial intelligence (Бытовой искусственный интеллект) refers to specialized artificial intelligence programs embedded in consumer devices and processes.
Continuous feature (Непрерывная функция) A floating-point feature with an infinite range of possible values. Contrast with discrete feature.
Contributor (Сотрудник) A human worker providing annotations on the Appen data annotation platform.
Control theory (Теория управления) In control systems engineering, a subfield of mathematics that deals with the control of continuously operating dynamical systems in engineered processes and machines. The objective is to develop a control model for controlling such systems using a control action in an optimum manner, without delay or overshoot, and ensuring control stability [123].
Convenience sampling (Удобная выборка) Using a dataset not gathered scientifically in order to run quick experiments. Later on, it's essential to switch to a scientifically gathered dataset.
Convergence (Конвергенция) Informally, often refers to a state reached during training in which training loss and validation loss change very little or not at all with each iteration after a certain number of iterations. In other words, a model reaches convergence when additional training on the current data will not improve the model. In deep learning, loss values sometimes stay constant or nearly so for many iterations before finally descending, temporarily producing a false sense of convergence. See also early stopping.
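A minimal sketch of one way such a convergence check can be implemented for early stopping; the tolerance, patience, and the train_one_iteration / validation_loss helpers are hypothetical placeholders, not a prescribed recipe.

```python
def has_converged(loss_history, patience=5, tol=1e-4):
    """Return True when the validation loss has changed by less than `tol`
    over the last `patience` recorded iterations."""
    if len(loss_history) <= patience:
        return False
    recent = loss_history[-(patience + 1):]
    return max(recent) - min(recent) < tol

# Hypothetical training loop: stop once additional training no longer helps.
# val_losses = []
# for step in range(max_steps):
#     train_one_iteration(model)                  # placeholder training step
#     val_losses.append(validation_loss(model))   # placeholder evaluation
#     if has_converged(val_losses):
#         break
```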
Convex function (Выпуклая функция) is a function where the area above the graph of the function is a convex set. The prototype of a convex function is U-shaped. A strictly convex function has exactly one local minimum point. Classical U-shaped functions are strictly convex functions. However, some convex functions (such as straight lines) do not have a U-shape. Many common loss functions are convex: L2 loss, Log Loss, L1 regularization, L2 regularization. Many variants of gradient descent are guaranteed to find a point close to the minimum of a strictly convex function. Similarly, many variants of stochastic gradient descent have a high probability (though not a guarantee) of finding a point close to the minimum of a strictly convex function. The sum of two convex functions (e.g., L2 loss + L1 regularization) is a convex function. Deep models are never convex functions. Notably, algorithms designed for convex optimization tend to find reasonably good solutions in deep networks anyway, even if those solutions do not guarantee a global minimum.
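Formally, the standard definition behind the description above: a function f is convex if, for all points x, y in its domain and every λ in [0, 1],

```latex
f(\lambda x + (1 - \lambda) y) \;\le\; \lambda f(x) + (1 - \lambda) f(y).
```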
Convex optimization (Выпуклая оптимизация) The process of using mathematical techniques such as gradient descent to find the minimum of a convex function. A great deal of research in machine learning has focused on formulating various problems as convex optimization problems and in solving those problems more efficiently. For complete details, see Boyd and Vandenberghe, Convex Optimization [124].
Convex set (Выпуклое множество) A subset of Euclidean space such that a line drawn between any two points in the subset remains completely within the subset. For instance, a line segment, a filled circle, or a filled rectangle is a convex set, whereas a crescent or any shape with an indentation is not.
Convolution (Свертка) The process of filtering. A filter (or equivalently: a kernel or a template) is shifted over an input image. The pixels of the output image are the summed product of the values in the filter pixels and the corresponding values in the underlying image [125].
Convolutional filter (Сверточный фильтр) One of the two actors in a convolutional operation. (The other actor is a slice of an input matrix.) A convolutional filter is a matrix having the same rank as the input matrix, but a smaller shape.
Convolutional layer (Сверточный слой) A layer of a deep neural network in which a convolutional filter passes along an input matrix.
Convolutional neural network (CNN) (Сверточная нейронная сеть) is a type of neural network that identifies and interprets images.
Convolutional neural network (Сверточная нейронная сеть) In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of deep neural networks most commonly applied to analyzing visual imagery. CNNs use a variation of multilayer perceptrons designed to require minimal preprocessing. They are also known as shift invariant or space invariant artificial neural networks (SIANN), based on their shared-weights architecture and translation-invariance characteristics: convolution kernels or filters slide over the input features and produce translation-equivariant responses known as feature maps.
Convolutional operation (Сверточная операция) The following two-step mathematical operation: (1) element-wise multiplication of the convolutional filter and a slice of an input matrix (the slice of the input matrix has the same rank and size as the convolutional filter); (2) summation of all the values in the resulting product matrix.
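A minimal NumPy sketch of this two-step operation, repeated at every position of a toy input matrix to produce a feature map; the input and filter values are illustrative, and (as is usual in deep learning) the filter is not flipped.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid 2D convolution (no padding): at each position, element-wise
    multiply the kernel with a slice of the input and sum the products."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 "image"
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])       # toy 2x2 convolutional filter
print(convolve2d(image, kernel))                   # 3x3 output (feature map)
```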
Corelet programming environment (Corelet programming environment) is a scalable environment that allows programmers to set the functional behavior of a neural network by adjusting its parameters and communication characteristics.
Corpus (Корпус) A large dataset of written or spoken material that can be used to train a machine to perform linguistic tasks.
Correlation (Корреляция) is a statistical relationship between two or more random variables.
Correlation analysis (Корреляционный анализ) is a statistical data processing method that measures the strength of the relationship between two or more variables. Thus, it determines whether there is a connection between the phenomena and how strong the connection between these phenomena is.
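A minimal example of measuring the strength of such a relationship with NumPy's Pearson correlation coefficient; the data are illustrative.

```python
import numpy as np

# Toy data: hours studied vs. exam score (illustrative values).
hours = np.array([1, 2, 3, 4, 5, 6], dtype=float)
score = np.array([52, 55, 61, 64, 70, 74], dtype=float)

r = np.corrcoef(hours, score)[0, 1]  # Pearson correlation coefficient
print(f"r = {r:.3f}")                # close to 1.0 => strong positive relationship
```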
Cost (Cost) Synonym for loss. A measure of how far a model's predictions are from its labels. Or, to put it more pessimistically, a measure of how bad a model is. To determine this value, the model must define a loss function. For example, linear regression models typically use mean squared error as the loss function, while logistic regression models use log loss.
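A small NumPy sketch of the two losses mentioned above, computed on toy values.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error, the typical loss for linear regression."""
    return np.mean((y_true - y_pred) ** 2)

def log_loss(y_true, p_pred, eps=1e-12):
    """Log loss (binary cross-entropy), the typical loss for logistic regression."""
    p = np.clip(p_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

print(mse(np.array([3.0, 5.0]), np.array([2.5, 5.5])))    # 0.25
print(log_loss(np.array([1, 0]), np.array([0.9, 0.2])))   # ~0.164
```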
Co-training (Совместное обучение) A semi-supervised learning approach in which two models, trained on independent sets of features, label unlabeled data for each other; co-training essentially amplifies independent signals into a stronger signal. For instance, consider a classification model that categorizes individual used cars as either Good or Bad. One set of predictive features might focus on aggregate characteristics such as the year, make, and model of the car; another set of predictive features might focus on the previous owner's driving record and the car's maintenance history. The seminal paper on co-training is "Combining Labeled and Unlabeled Data with Co-Training" by Blum and Mitchell [126].
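A minimal sketch of the co-training loop under the two-view assumption; the scikit-learn classifier, the confidence heuristic, and the parameters are illustrative choices, not the exact procedure of Blum and Mitchell.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(X_view1, X_view2, y, labeled, rounds=5, k=10):
    """Minimal co-training loop: two models, each trained on its own feature
    view, take turns labeling the unlabeled examples they are most confident
    about; those pseudo-labels then enlarge the shared labeled pool."""
    y, labeled = y.copy(), labeled.copy()
    m1, m2 = LogisticRegression(), LogisticRegression()
    for _ in range(rounds):
        m1.fit(X_view1[labeled], y[labeled])
        m2.fit(X_view2[labeled], y[labeled])
        for model, X in ((m1, X_view1), (m2, X_view2)):
            unlabeled = np.flatnonzero(~labeled)
            if unlabeled.size == 0:
                return m1, m2
            conf = model.predict_proba(X[unlabeled]).max(axis=1)
            chosen = unlabeled[np.argsort(conf)[-k:]]   # most confident examples
            y[chosen] = model.predict(X[chosen])        # pseudo-label them
            labeled[chosen] = True
    return m1, m2
```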
Counterfactual fairness (Контрфактическая справедливость) A fairness metric that checks whether a classifier produces the same result for one individual as it does for another individual who is identical to the first, except with respect to one or more sensitive attributes. Evaluating a classifier for counterfactual fairness is one method for surfacing potential sources of bias in a model. See "When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness" for a more detailed discussion of counterfactual fairness.
Coverage bias (Coverage bias) This bias means that the study sample is not representative: the population that was sampled does not match the population of interest, so some records in the data set have zero chance of being included in the sample.
Crash blossom (Crash blossom) A sentence or phrase with an ambiguous meaning. Crash blossoms present a significant problem in natural language understanding. For example, the headline "Red Tape Holds Up Skyscraper" is a crash blossom because an NLU model could interpret the headline literally or figuratively.
Critic (Критик) Synonym for Deep Q-Network.
Critical information infrastructure (Критическая информационная инфраструктура) objects of critical information infrastructure, as well as telecommunication networks used to organize the interaction of such objects.
Critical information infrastructure of the Russian Federation (Критическая информационная инфраструктура Российской Федерации) a set of critical information infrastructure objects, as well as telecommunication networks used to organize the interaction of critical information infrastructure objects with each other.
Cross-entropy (Кросс-энтропия) A generalization of Log Loss to multi-class classification problems. Cross-entropy quantifies the difference between two probability distributions. See also perplexity.
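A minimal NumPy sketch for the multi-class case, with made-up one-hot labels and predicted probabilities.

```python
import numpy as np

def cross_entropy(p_true, p_pred, eps=1e-12):
    """Cross-entropy between a true distribution (here one-hot labels)
    and a predicted distribution, averaged over examples."""
    p_pred = np.clip(p_pred, eps, 1.0)
    return -np.mean(np.sum(p_true * np.log(p_pred), axis=1))

labels = np.array([[0, 0, 1],            # one-hot true classes for 2 examples
                   [1, 0, 0]])
preds = np.array([[0.1, 0.2, 0.7],       # predicted class probabilities
                  [0.6, 0.3, 0.1]])
print(cross_entropy(labels, preds))      # ~0.434
```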
Crossover (Also recombination) (Кроссовер) In genetic algorithms and evolutionary computation, a genetic operator used to combine the genetic information of two parents to generate new offspring. It is one way to stochastically generate new solutions from an existing population, and analogous to the crossover that happens during sexual reproduction in biological organisms. Solutions can also be generated by cloning an existing solution, which is analogous to asexual reproduction. Newly generated solutions are typically mutated before being added to the population [127].
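A minimal single-point crossover sketch in Python; the bit-string genomes and random cut point are illustrative.

```python
import random

def single_point_crossover(parent_a, parent_b):
    """Combine two parent genomes by swapping their tails at a random cut point."""
    point = random.randint(1, len(parent_a) - 1)
    child1 = parent_a[:point] + parent_b[point:]
    child2 = parent_b[:point] + parent_a[point:]
    return child1, child2

a = [1, 1, 1, 1, 1, 1]
b = [0, 0, 0, 0, 0, 0]
print(single_point_crossover(a, b))  # e.g. ([1, 1, 0, 0, 0, 0], [0, 0, 1, 1, 1, 1])
```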
Cross-Validation (k-fold Cross-Validation, Leave-p-out Cross-Validation) (Перекрёстная проверка) A collection of processes designed to evaluate how the results of a predictive model will generalize to new data sets. Common variants include k-fold cross-validation and leave-p-out cross-validation.
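A small k-fold cross-validation sketch using scikit-learn; the linear model and toy dataset are illustrative.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression

X = np.arange(20, dtype=float).reshape(-1, 1)   # toy features
y = 3.0 * X.ravel() + np.random.randn(20)       # toy targets

scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))   # R^2 on the held-out fold

print(np.mean(scores))   # average generalization estimate over the 5 folds
```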
Cryogenic freezing (cryonics, human cryopreservation) is a technology for preserving the head or body of a person after death in a state of deep cooling (using liquid nitrogen), with the intention of reviving them in the future.