There is much overlap among these fields. However, some distinctions can be made. Of necessity, I will over-simplify some things and give short shrift to others.
Firstly, Artificial Intelligence is genuinely distinct from the rest. Artificial Intelligence is the study of how to make intelligent agents. Essentially, it is the question of how to program a computer to behave and perform a task as an intelligent agent (say, a person) would. This need not involve learning or induction at all; it can simply be a way to 'build a better mousetrap.' For instance, Artificial Intelligence applications have included programs to monitor and control ongoing processes (e.g., increase aspect A if it seems too low). Notice that Artificial Intelligence can include darn-near anything a machine does, so long as it doesn't do it 'stupidly.'
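The process-monitoring example can be sketched as a tiny rule-based agent. Note there is no learning here at all, only fixed rules; the threshold and step values below are hypothetical choices for illustration.

```python
# A minimal rule-based controller: no learning, just fixed rules.
# The low/high thresholds and step size are hypothetical.
def control_step(value, low=10.0, high=20.0, step=1.0):
    """Nudge a monitored quantity back into range if it drifts out."""
    if value < low:
        return value + step   # "increase aspect A if it seems too low"
    if value > high:
        return value - step   # decrease it if it seems too high
    return value              # within range: do nothing
```

This is 'intelligent' only in the thin sense that it acts sensibly on its inputs, which is exactly the point: AI need not involve induction.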
In practice, however, most tasks that require intelligence require an ability to induce new knowledge from experience. Thus, a large area within AI is machine learning. A computer program is said to learn some task from experience if its performance at the task improves with experience, according to some performance measure. Machine learning involves the study of algorithms that can extract information automatically (i.e., without on-line human guidance). It is certainly the case that some of these procedures include ideas derived directly from, or inspired by, classical statistics, but they don't have to be. Similarly to AI, machine learning is very broad and can include almost everything, so long as there is some inductive component to it. An example of a machine learning algorithm might be a Kalman filter.
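To make the Kalman filter example concrete, here is a minimal one-dimensional sketch: the filter tracks a scalar state, and each new measurement improves the estimate, so performance improves with experience in exactly the sense described above. The noise variances are hypothetical defaults.

```python
def kalman_1d(measurements, meas_var=1.0, process_var=1e-4):
    """One-dimensional Kalman filter tracking a scalar state.

    Each step blends the current prediction with the new measurement,
    weighted by their uncertainties via the Kalman gain.
    """
    x, p = 0.0, 1.0            # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p += process_var        # predict: uncertainty grows over time
        k = p / (p + meas_var)  # Kalman gain: trust in the measurement
        x += k * (z - x)        # update: pull estimate toward measurement
        p *= (1 - k)            # uncertainty shrinks after the update
        estimates.append(x)
    return estimates
```

Fed a stream of noisy readings of a constant, the estimate converges toward the true value as more data arrives, which is the inductive component that qualifies it as learning.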
Data mining is an area that has taken much of its inspiration and many of its techniques from machine learning (and some from statistics as well), but puts them to different ends. Data mining is usually carried out by a person, in a specific situation, on a particular data set, with a goal in mind. Typically, this person wants to leverage the power of the various pattern-recognition techniques that have been developed in machine learning. Often, the data set is massive, complicated, and/or may have special problems (such as more variables than observations). Usually, the goal is either to discover/generate some preliminary insights in an area where there was little knowledge beforehand, or to be able to predict future observations accurately. Moreover, data mining procedures can be either 'unsupervised' (we don't know the answer: discovery) or 'supervised' (we know the answer: prediction). Note that the goal is generally not to develop a more sophisticated understanding of the underlying data-generating process. Common data mining techniques include cluster analyses, classification and regression trees, and neural networks.
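As an illustration of the 'unsupervised' (discovery) side, here is a naive sketch of k-means cluster analysis on one-dimensional data. No answer is supplied in advance; the algorithm simply finds groupings. This is a toy version under simplified assumptions, not a production implementation.

```python
import random

def kmeans_1d(points, k=2, iters=20, seed=0):
    """Naive 1-D k-means: an unsupervised 'discovery' technique."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)   # pick k distinct starting centers
    for _ in range(iters):
        # assign each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # move each center to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)
```

Given points bunched around two values, the returned centers land near those bunches, with no labels ever provided.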
I suppose I needn’t say much to clarify what statistics is on this site; perhaps, though, I can say a few things. Classical statistics (here meaning both frequentist and Bayesian) is a sub-topic within mathematics. I think of it largely as the intersection of what we know about probability and what we know about optimization. Although mathematical statistics can be studied as simply a Platonic object of inquiry, it is mostly understood as more practical and applied in character than other, more rarefied areas of mathematics. As such (and notably in contrast to data mining above), it is mostly employed towards better understanding some particular data-generating process. Thus, it usually starts with a formally specified model, and from this are derived procedures to accurately extract that model from noisy instances (i.e., estimation, by optimizing some loss function) and to be able to distinguish it from alternative possibilities (i.e., inference based on known properties of sampling distributions). The prototypical statistical technique is regression.
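The regression workflow described above, starting from a specified model and estimating it by optimizing a loss function, can be sketched as ordinary least squares for the simple linear model y = a + b*x, where minimizing squared error yields the familiar closed-form estimates.

```python
def ols_fit(xs, ys):
    """Ordinary least squares for the model y = a + b*x.

    Minimizing the squared-error loss gives the closed-form
    estimates for the intercept a and slope b.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b
```

Here the model comes first and the procedure is derived from it, which is exactly the contrast with data mining drawn above.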