Posts

Showing posts from January, 2021

Policy-based design in programming.

Policy-based design, also known as policy-based class design or policy-based programming, is the term used in Modern C++ Design for a design approach based on an idiom for C++ known as policies. It has been described as a compile-time variant of the strategy pattern, and it has connections with C++ template metaprogramming. It was first popularized in C++ by Andrei Alexandrescu with Modern C++ Design and with his column Generic<Programming> in the C/C++ Users Journal, and it is currently closely associated with C++ and D, as it requires a compiler with highly robust support for templates, which was not common before about 2003. Previous examples of this design approach, based on parameterized generic code, include the parametric modules (functors) of the ML languages and C++ allocators for memory-management policy. The central idiom in policy-based design is a class template (called the host class) that takes several type parameters as input, which are instantiated with types selected by the user (called policy classes), each implementing a particular implicit interface (called a policy) and encapsulating some orthogonal aspect of the behavior of the instantiated host class.
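
Policy-based design is native to C++ templates, but the shape of the idiom can be sketched in Python as a runtime analogue: a "host" class is assembled from policy classes passed in as parameters. All class and function names below are illustrative, and Python resolves the composition at runtime rather than at compile time as C++ would.

```python
class PrintToConsole:
    def output(self, msg):
        print(msg)

class PrintUppercase:
    def output(self, msg):
        print(msg.upper())

def make_printer(output_policy):
    """Build a 'host class' whose behavior is configured by the policy."""
    class Printer(output_policy):
        def run(self, msg):
            self.output(msg)  # behavior supplied by the chosen policy class
    return Printer

Printer = make_printer(PrintUppercase)
Printer().run("hello, policies")  # prints HELLO, POLICIES
```

Swapping PrintToConsole for PrintUppercase changes one orthogonal aspect of the host's behavior without touching the host itself, which is the point of the idiom.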

Template metaprogramming.

Template metaprogramming (TMP) is a metaprogramming technique in which templates are used by a compiler to generate temporary source code, which is merged by the compiler with the rest of the source code and then compiled. The output of these templates includes compile-time constants, data structures, and complete functions. The use of templates can be thought of as compile-time polymorphism. The technique is used by a number of languages, the best known being C++, but also Curl, D, and XL. Template metaprogramming was, in a sense, discovered accidentally. Some other languages support similar, if not more powerful, compile-time facilities (such as Lisp macros), but those are beyond the scope of this post.

Homoiconicity in programming.

In computer programming, homoiconicity (from the Greek words homo- meaning "the same" and icon meaning "representation") is a property of some programming languages. A language is homoiconic if a program written in it can be manipulated as data using the language, and thus the program's internal representation can be inferred just by reading the program itself. For example, a Lisp program is written as a regular Lisp list, and can be manipulated by other Lisp code. This property is often summarized by saying that the language treats "code as data". In a homoiconic language, the primary representation of programs is also a data structure in a primitive type of the language itself. This makes metaprogramming easier than in a language without this property: reflection in the language (examining the program's entities at runtime) depends on a single, homogeneous structure, and it does not have to handle several different structures that would appear in a complex syntax.
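
Python is not homoiconic in the Lisp sense, but its standard ast module gives a feel for treating code as data: a program can be parsed into a data structure, rewritten, and executed. A minimal sketch:

```python
import ast

source = "x = 2 + 3"
tree = ast.parse(source)            # the program as a data structure
# Rewrite every addition into a multiplication by editing the tree.
for node in ast.walk(tree):
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
        node.op = ast.Mult()
ast.fix_missing_locations(tree)     # defensive: ensure location info is set
namespace = {}
exec(compile(tree, "<ast>", "exec"), namespace)
print(namespace["x"])               # 6, not 5: the code was edited as data
```

In a true homoiconic language such as Lisp, no separate parsing step is needed because the program already is the primitive data structure.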

Attribute-oriented programming.

Attribute-oriented programming (@OP) is a program-level marking technique. Programmers can mark program elements (e.g. classes and methods) with attributes to indicate that they maintain application-specific or domain-specific semantics. For example, some programmers may define a "logging" attribute and associate it with a method to indicate the method should implement a logging function, while other programmers may define a "web service" attribute and associate it with a class to indicate the class should be implemented as a web service. Attributes separate an application's core logic (or business logic) from application-specific or domain-specific semantics (e.g. logging and web service functions). By hiding the implementation details of those semantics from program code, attributes increase the level of programming abstraction and reduce programming complexity, resulting in simpler and more readable programs.
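
Java annotations are the canonical vehicle for this style; as a rough Python sketch, a decorator can play the role of the "logging" attribute described above, marking a method and supplying the cross-cutting behavior (all names here are illustrative):

```python
import functools

def logging(func):
    """Illustrative 'logging' attribute: marks a method, adds the behavior."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

class Service:
    @logging                      # the mark; the core logic below stays clean
    def handle(self, request):
        return f"handled {request}"

print(Service().handle("ping"))   # logs the call, then runs the core logic
```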

Reflection use in programming.

Reflection helps programmers make generic software libraries to display data, process different formats of data, perform serialization or deserialization of data for communication, or do bundling and unbundling of data for containers or bursts of communication. Effective use of reflection almost always requires a plan: a design framework, encoding description, object library, a map of a database or entity relations. Reflection makes a language more suited to network-oriented code. For example, it assists languages such as Java to operate well in networks by enabling libraries for serialization, bundling and varying data formats. Languages without reflection (e.g. C) have to use auxiliary compilers, e.g. for Abstract Syntax Notation, to produce code for serialization and bundling. Reflection can be used for observing and modifying program execution at runtime. A reflection-oriented program component can monitor the execution of an enclosure of code and can modify itself according to a desired goal related to that enclosure.
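
A minimal Python sketch of the serialization use case: one generic function inspects any plain object's attributes at runtime rather than being written per class (the Point class is illustrative):

```python
import json

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

def serialize(obj):
    """Generic: discovers the object's type and fields via reflection."""
    return json.dumps({"type": type(obj).__name__, "fields": vars(obj)})

print(serialize(Point(3, 4)))  # {"type": "Point", "fields": {"x": 3, "y": 4}}
```

The same serialize function works unchanged on any attribute-bearing object, which is exactly the generic-library payoff described above.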

Automatic programming.

In computer science, the term automatic programming identifies a type of computer programming in which some mechanism generates a computer program to allow human programmers to write the code at a higher abstraction level. There has been little agreement on the precise definition of automatic programming, mostly because its meaning has changed over time. David Parnas, tracing the history of "automatic programming" in published research, noted that in the 1940s it described automation of the manual process of punching paper tape. Later it referred to translation of high-level programming languages like Fortran and ALGOL. In fact, one of the earliest programs identifiable as a compiler was called Autocode. Parnas concluded that "automatic programming has always been a euphemism for programming in a higher-level language than was then available to the programmer." Program synthesis is one type of automatic programming where a procedure is created from scratch, based on mathematical requirements.

Metaprogramming.

Metaprogramming is a programming technique in which computer programs have the ability to treat other programs as their data. It means that a program can be designed to read, generate, analyze or transform other programs, and even modify itself while running. In some cases, this allows programmers to minimize the number of lines of code to express a solution, in turn reducing development time. It also allows programs greater flexibility to efficiently handle new situations without recompilation. Metaprogramming can be used to move computations from run-time to compile-time, to generate code using compile-time computations, and to enable self-modifying code. The language in which the metaprogram is written is called the metalanguage. The language of the programs that are manipulated is called the object language. The ability of a programming language to be its own metalanguage is called reflection or "reflexivity". Reflection is a valuable language feature to facilitate metaprogramming.
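
A tiny Python sketch of a program generating other programs, with Python serving as both metalanguage and object language (the generated function names and factors are illustrative):

```python
# Generate a family of functions from data instead of writing each by hand.
specs = {"double": 2, "triple": 3}
generated = {}
for name, factor in specs.items():
    src = f"def {name}(x):\n    return x * {factor}\n"  # the program as text
    exec(src, generated)                                 # compile and load it

print(generated["double"](21), generated["triple"](7))   # 42 21
```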

End-user development.

End-user development (EUD) or end-user programming (EUP) refers to activities and tools that allow end-users – people who are not professional software developers – to program computers. People who are not professional developers can use EUD tools to create or modify software artifacts (descriptions of automated behavior) and complex data objects without significant knowledge of a programming language. In 2005 it was estimated (using statistics from the U.S. Bureau of Labor Statistics) that by 2012 there would be more than 55 million end-user developers in the United States, compared with fewer than 3 million professional programmers. Various EUD approaches exist, and it is an active research topic within the field of computer science and human-computer interaction. Examples include natural language programming, spreadsheets, scripting languages (particularly in an office suite or art application), visual programming, trigger-action programming and programming by example. The most popular EUD tool is the spreadsheet.

Array programming definition.

In computer science, array programming refers to solutions which allow the application of operations to an entire set of values at once. Such solutions are commonly used in scientific and engineering settings. Modern programming languages that support array programming (also known as vector or multidimensional languages) have been engineered specifically to generalize operations on scalars to apply transparently to vectors, matrices, and higher-dimensional arrays. These include APL, J, Fortran 90, Mata, MATLAB, Analytica, TK Solver (as lists), Octave, R, Cilk Plus, Julia, Perl Data Language (PDL), Wolfram Language, and the NumPy extension to Python. In these languages, an operation that operates on entire arrays can be called a vectorized operation, regardless of whether it is executed on a vector processor (which implements vector instructions) or not. Array programming primitives concisely express broad ideas about data manipulation. The level of concision can be dramatic in certain cases: it is not uncommon to find array programming language one-liners that require more than a couple of pages of object-oriented code.
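
For instance, with NumPy (named above) a single vectorized expression applies an operation to every element of an array without an explicit loop:

```python
import numpy as np

celsius = np.array([0.0, 25.0, 100.0])
fahrenheit = celsius * 9 / 5 + 32   # one operation applied to the whole array
print(fahrenheit)                   # [ 32.  77. 212.]
```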

What is computer programming.

Computer programming is the process that leads from an original formulation of a computing problem to executable computer programs. Programming involves activities such as analysis, developing understanding, generating algorithms, verifying the requirements of algorithms (including their correctness and resource consumption), and implementation (commonly referred to as coding) of algorithms in a target programming language. Source code is written in one or more programming languages. The purpose of programming is to find a sequence of instructions that will automate performing a specific task or solving a given problem.

Physics.

Physics is the natural science that studies matter, its motion and behavior through space and time, and the related entities of energy and force. Physics is one of the most fundamental scientific disciplines, and its main goal is to understand how the universe behaves. Physics is one of the oldest academic disciplines and, through its inclusion of astronomy, perhaps the oldest. Over much of the past two millennia, physics, chemistry, biology, and certain branches of mathematics were a part of natural philosophy, but during the Scientific Revolution in the 17th century these natural sciences emerged as unique research endeavors in their own right. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the fundamental mechanisms studied by other sciences and suggest new avenues of research in academic disciplines such as mathematics and philosophy.

Chemistry.

Chemistry is the scientific discipline involved with elements and compounds composed of atoms, molecules and ions: their composition, structure, properties, behavior and the changes they undergo during a reaction with other substances. In the scope of its subject, chemistry occupies an intermediate position between physics and biology. It is sometimes called the central science because it provides a foundation for understanding both basic and applied scientific disciplines at a fundamental level. For example, chemistry explains aspects of plant chemistry (botany), the formation of igneous rocks (geology), how atmospheric ozone is formed and how environmental pollutants are degraded (ecology), the properties of the soil on the moon (cosmochemistry), how medications work (pharmacology), and how to collect DNA evidence at a crime scene (forensics). Chemistry addresses topics such as how atoms and molecules interact via chemical bonds to form new chemical compounds.

Superintelligence.

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity. University of Oxford philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". The program Fritz falls short of superintelligence—even though it is much better than humans at chess—because Fritz cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or a human superintelligence would possess capacities such as intentionality or first-person consciousness.

Strong artificial intelligence.

Artificial general intelligence (AGI) is the hypothetical intelligence of a computer program that has the capacity to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI, full AI, or general intelligent action. Some academic sources reserve the term "strong AI" for computer programs that can experience sentience, self-awareness and consciousness. Today's AI is speculated to be decades away from AGI. In contrast to strong AI, weak AI (also called narrow AI) is not intended to perform human-like cognitive abilities and personality; rather, weak AI is limited to the use of software to study or accomplish specific pre-learned problem solving or reasoning tasks (expert systems).

Weak AI.

Weak artificial intelligence (weak AI) is artificial intelligence that implements a limited part of the mind, or, as narrow AI, is focused on one narrow task. In John Searle's terms it "would be useful for testing hypotheses about minds, but would not actually be minds". Contrast this with strong AI, which is defined as a machine with the ability to apply intelligence to any problem rather than just one specific problem, and which is sometimes considered to require consciousness, sentience and mind.

Numerical Analysis.

Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics). Numerical analysis naturally finds application in all fields of engineering and the physical sciences, but in the 21st century also the life sciences, social sciences, medicine, business and even the arts have adopted elements of scientific computations. The growth in computing power has revolutionized the use of realistic mathematical models in science and engineering, and subtle numerical analysis is required to implement these detailed models of the world. For example, ordinary differential equations appear in celestial mechanics (predicting the motions of planets, stars and galaxies); numerical linear algebra is important for data analysis; stochastic differential equations and Markov chains are essential in simulating living cells for medicine and biology.
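
A classic small example of numerical approximation is Newton's method for root finding; a minimal Python sketch that approximates the square root of 2 (the tolerance and starting point are arbitrary choices for illustration):

```python
def newton_sqrt(a, x0=1.0, tol=1e-12):
    """Approximate sqrt(a) by iterating x -> (x + a/x) / 2."""
    x = x0
    while abs(x * x - a) > tol:   # stop when the residual is small enough
        x = (x + a / x) / 2
    return x

print(newton_sqrt(2.0))  # 1.4142135623730951, close to the exact value
```

The answer is an approximation accepted once the error falls below a tolerance, which is the defining trade-off of the field: speed and generality in exchange for controlled inexactness.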

Type systems and type theory.

In mathematics, logic, and computer science, a type system is a formal system in which every term has a "type" which defines its meaning and the operations that may be performed on it. Type theory is the academic study of type systems. Some type theories serve as alternatives to set theory as a foundation of mathematics. Two well-known such theories are Alonzo Church's typed λ-calculus and Per Martin-Löf's intuitionistic type theory. Type theory was created to avoid paradoxes in previous foundations such as naive set theory, formal logics and rewrite systems. Type theory is closely related to, and in some cases overlaps with, computational type systems, which are a programming language feature used to reduce bugs.

Formal semantics.

In programming language theory, semantics is the field concerned with the rigorous mathematical study of the meaning of programming languages. It does so by evaluating the meaning of syntactically valid strings defined by a specific programming language, showing the computation involved. If evaluation were applied to syntactically invalid strings, the result would be non-computation. Semantics describes the processes a computer follows when executing a program in that specific language. This can be shown by describing the relationship between the input and output of a program, or by explaining how the program will execute on a certain platform, hence creating a model of computation. Formal semantics helps, for instance, to write compilers, to better understand what a program is doing, and to prove, e.g., that the if statement "if 1 == 1 then S1 else S2" has the same effect as S1 alone.

Programming language theory?

Programming language theory (PLT) is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and of their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering, linguistics and even cognitive science. It has become a well-recognized branch of computer science, and an active research area, with results published in numerous journals dedicated to PLT, as well as in general computer science and engineering publications.

Programming language pragmatics.

A programming language is a formal language comprising a set of instructions that produce various kinds of output. Programming languages are used in computer programming to implement algorithms. Most programming languages consist of instructions for computers. There are programmable machines that use a set of specific instructions, rather than general programming languages. Since the early 1800s, programs have been used to direct the behavior of machines such as Jacquard looms, music boxes and player pianos. The programs for these machines (such as a player piano's scrolls) did not produce different behavior in response to different inputs or conditions. Thousands of different programming languages have been created, and more are being created every year. Many programming languages are written in an imperative form (i.e., as a sequence of operations to perform) while other languages use the declarative form (i.e. the desired result is specified, not how to achieve it). The description of a programming language is usually split into the two components of syntax (form) and semantics (meaning).

Compiler Theory.

A compiler implements a formal transformation from a high-level source program to a low-level target program. Compiler design can define an end-to-end solution or tackle a defined subset that interfaces with other compilation tools, e.g. preprocessors, assemblers, linkers. Design requirements include rigorously defined interfaces both internally between compiler components and externally between supporting toolsets. In the early days, the approach taken to compiler design was directly affected by the complexity of the computer language to be processed, the experience of the person(s) designing it, and the resources available. Resource limitations led to the need to pass through the source code more than once. A compiler for a relatively simple language written by one person might be a single, monolithic piece of software. However, as the source language grows in complexity the design may be split into a number of interdependent phases. Separate phases provide design improvements that focus development on the functions in the compilation process.

Data mining.

Data mining is a process of discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems. Data mining is an interdisciplinary subfield of computer science and statistics with an overall goal to extract information (with intelligent methods) from a data set and transform the information into a comprehensible structure for further use. Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD. Aside from the raw analysis step, it also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating.

What is structured storage.

Structured storage is computer storage for structured data, often in the form of a distributed database. Software systems formally known as structured storage systems include Apache Cassandra, Google's Bigtable and Apache HBase.

Relational Database.

A relational database is a digital database based on the relational model of data, as proposed by E. F. Codd in 1970. A software system used to maintain relational databases is a relational database management system (RDBMS). Many relational database systems have the option of using SQL (Structured Query Language) for querying and maintaining the database.
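
A minimal illustration using sqlite3, the relational database engine bundled with Python's standard library (the table and rows below are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")          # throwaway in-memory database
conn.execute("CREATE TABLE people (name TEXT, born INTEGER)")
conn.execute("INSERT INTO people VALUES ('Codd', 1923)")
for row in conn.execute("SELECT name FROM people WHERE born < 1950"):
    print(row)                              # ('Codd',)
conn.close()
```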

Distributed Computing.

Distributed computing is a field of computer science that studies distributed systems. A distributed system is a system whose components are located on different networked computers, which communicate and coordinate their actions by passing messages to one another. The components interact with one another in order to achieve a common goal. Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components. Examples of distributed systems vary from SOA-based systems to massively multiplayer online games to peer-to-peer applications. A computer program that runs within a distributed system is called a distributed program (and distributed programming is the process of writing such programs). There are many different types of implementations for the message passing mechanism, including pure HTTP, RPC-like connectors and message queues. Distributed computing also refers to the use of distributed systems to solve computational problems.

Concurrency in computer science.

In computer science, concurrency is the ability of different parts or units of a program, algorithm, or problem to be executed out-of-order or in partial order, without affecting the final outcome. This allows for parallel execution of the concurrent units, which can significantly improve the overall speed of execution in multi-processor and multi-core systems. In more technical terms, concurrency refers to the decomposability of a program, algorithm, or problem into order-independent or partially-ordered components or units.
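
A small Python illustration: the units of work below share no state and are order-independent, so a thread pool may execute them in any interleaving without changing the result:

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n  # independent unit: no shared state, order does not matter

with ThreadPoolExecutor() as pool:
    results = list(pool.map(square, range(8)))  # may run out of order
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49] regardless of scheduling
```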

Parallel Computing.

Parallel computing is a type of computation where many calculations or the execution of processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.

Information visualization.

Information visualization or information visualisation is the study of (interactive) visual representations of abstract data to reinforce human cognition. The abstract data include both numerical and non-numerical data, such as text and geographic information. The naming of subfields is sometimes confusing. One accepted definition is that it's information visualization when the spatial representation is chosen, whereas it's scientific visualization when the spatial representation is given.

Image processing.

Digital image processing is the use of a digital computer to process digital images through an algorithm. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and distortion during processing. Since images are defined over two dimensions (perhaps more), digital image processing may be modeled in the form of multidimensional systems. The generation and development of digital image processing are mainly affected by three factors: first, the development of computers; second, the development of mathematics (especially the creation and improvement of discrete mathematics theory); and third, the increased demand for a wide range of applications in environment, agriculture, military, industry and medical science.

Computer graphics definition.

Computer graphics deals with generating images with the aid of computers. Today, computer graphics is a core technology in digital photography, film, video games, cell phone and computer displays, and many specialized applications. A great deal of specialized hardware and software has been developed, with the displays of most devices being driven by computer graphics hardware. It is a vast and recently developed area of computer science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG, or typically in the context of film as computer generated imagery (CGI). The non-artistic aspects of computer graphics are the subject of computer science research.

Operating system?

An operating system (OS) is system software that manages computer hardware and software resources, and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is usually executed directly by the hardware and frequently makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows with a market share of around 76.45%. macOS by Apple Inc. is in second place (17.72%), and the varieties of Linux are collectively in third place.

Computer Architecture.

In computer engineering, computer architecture is a set of rules and methods that describe the functionality, organization, and implementation of computer systems. Some definitions of architecture define it as describing the capabilities and programming model of a computer but not a particular implementation. In other definitions computer architecture involves instruction set architecture design, microarchitecture design, logic design, and implementation.

Computer Security.

Computer security, cybersecurity or information technology security (IT security) is the protection of computer systems and networks from the theft of or damage to their hardware, software, or electronic data, as well as from the disruption or misdirection of the services they provide. The field is becoming more significant due to the increased reliance on computer systems, the Internet and wireless network standards such as Bluetooth and Wi-Fi, and due to the growth of "smart" devices, including smartphones, televisions, and the various devices that constitute the "Internet of things". Owing to its complexity, both in terms of politics and technology, cybersecurity is also one of the major challenges in the contemporary world.

Networking.

A computer network is a group of computers that use a set of common communication protocols over digital interconnections for the purpose of sharing resources located on or provided by the network nodes. The interconnections between nodes are formed from a broad spectrum of telecommunication network technologies, based on physically wired, optical, and wireless radio-frequency methods that may be arranged in a variety of network topologies. The nodes of a computer network may include personal computers, servers, networking hardware, or other specialised or general-purpose hosts. They are identified by hostnames and network addresses. Hostnames serve as memorable labels for the nodes, rarely changed after initial assignment. Network addresses serve for locating and identifying the nodes by communication protocols such as the Internet Protocol.

Robotics.

Robotics is an interdisciplinary field that integrates computer science and engineering. Robotics involves design, construction, operation, and use of robots. The goal of robotics is to design machines that can help and assist humans. Robotics integrates fields of mechanical engineering, electrical engineering, information engineering, mechatronics, electronics, bioengineering, computer engineering, control engineering, software engineering, among others.

Natural Language processing.

Natural language processing (NLP) is a subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, in particular how to program computers to process and analyze large amounts of natural language data. The result is a computer capable of "understanding" the contents of documents, including the contextual nuances of the language within them. The technology can then accurately extract information and insights contained in the documents as well as categorize and organize the documents themselves.

Evolutionary Computing.

In computer science, evolutionary computation is a family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial and error problem solvers with a metaheuristic or stochastic optimization character. In evolutionary computation, an initial set of candidate solutions is generated and iteratively updated. Each new generation is produced by stochastically removing less desired solutions, and introducing small random changes. In biological terminology, a population of solutions is subjected to natural selection (or artificial selection) and mutation. As a result, the population will gradually evolve to increase in fitness, in this case the chosen fitness function of the algorithm.
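
A minimal Python sketch of that generate-select-mutate loop; the fitness function, population size, and mutation scale are arbitrary choices for illustration:

```python
import random

def fitness(x):
    return -(x - 3.14) ** 2          # peak at x = 3.14; higher is better

population = [random.uniform(-10, 10) for _ in range(30)]
for generation in range(100):
    # Selection: keep the fitter half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    # Mutation: offspring are the survivors plus small random changes.
    population = survivors + [x + random.gauss(0, 0.1) for x in survivors]

print(round(max(population, key=fitness), 2))  # approximately 3.14
```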

Machine Learning.

Machine learning (ML) is the study of computer algorithms that improve automatically through experience. It is seen as a part of artificial intelligence. Machine learning algorithms build a model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to do so. Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or unfeasible to develop conventional algorithms to perform the needed tasks.
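
A minimal sketch of building a model from training data: fitting a straight line to made-up noisy samples with NumPy, then using the fitted model on unseen input:

```python
import numpy as np

# Training data: noisy samples of the underlying relation y = 2x + 1.
x = np.array([0, 1, 2, 3, 4], dtype=float)
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

slope, intercept = np.polyfit(x, y, deg=1)  # learn a model from the samples
print(slope, intercept)                     # close to 2 and 1
print(slope * 10 + intercept)               # predict for unseen input x = 10
```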

Definition of computer vision.

Computer vision is an interdisciplinary scientific field that deals with how computers can gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to understand and automate tasks that the human visual system can do. Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g. in the form of decisions. Understanding in this context means the transformation of visual images (the input of the retina) into descriptions of the world that make sense to thought processes and can elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory.

Automated Reasoning.

Automated reasoning is an area of computer science (involving knowledge representation and reasoning) and metalogic dedicated to understanding different aspects of reasoning. The study of automated reasoning helps produce computer programs that allow computers to reason completely, or nearly completely, automatically. Although automated reasoning is considered a sub-field of artificial intelligence, it also has connections with theoretical computer science and philosophy.

What is Data structure.

In computer science, a data structure is a data organization, management, and storage format that enables efficient access and modification. More precisely, a data structure is a collection of data values, the relationships among them, and the functions or operations that can be applied to the data.

What is an algorithm.

In mathematics and computer science, an algorithm is a finite sequence of well-defined, computer-implementable instructions, typically to solve a class of problems or to perform a computation. Algorithms are always unambiguous and are used as specifications for performing calculations, data processing, automated reasoning, and other tasks.
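
Euclid's algorithm for the greatest common divisor is a standard illustration: a finite, unambiguous sequence of steps that is guaranteed to terminate.

```python
def gcd(a, b):
    """Euclid's algorithm: repeat until the remainder is zero."""
    while b:
        a, b = b, a % b   # each step strictly shrinks b, so it terminates
    return a

print(gcd(252, 105))  # 21
```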

What is AI.

Artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals, which involves consciousness and emotionality. The distinction between the former and the latter categories is often revealed by the acronym chosen. 'Strong' AI is usually labelled as AGI (Artificial General Intelligence) while attempts to emulate 'natural' intelligence have been called ABI (Artificial Biological Intelligence). Leading AI textbooks define the field as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Colloquially, the term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving".

Cryptography defined in computing.

Cryptography, or cryptology, is the practice and study of techniques for secure communication in the presence of third parties called adversaries. More generally, cryptography is about constructing and analyzing protocols that prevent third parties or the public from reading private messages; various aspects of information security such as data confidentiality, data integrity, authentication, and non-repudiation are central to modern cryptography. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, electrical engineering, communication science, and physics. Applications of cryptography include electronic commerce, chip-based payment cards, digital currencies, computer passwords, and military communications.

Number Theory.

Number theory (or arithmetic or higher arithmetic in older usage) is a branch of pure mathematics devoted primarily to the study of the integers and integer-valued functions. German mathematician Carl Friedrich Gauss (1777–1855) said, "Mathematics is the queen of the sciences—and number theory is the queen of mathematics." Number theorists study prime numbers as well as the properties of mathematical objects made out of integers (for example, rational numbers) or defined as generalizations of the integers (for example, algebraic integers).
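
As a small computational illustration of the primes mentioned above, the sieve of Eratosthenes enumerates all primes up to a bound:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: cross out multiples; what remains is prime."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [p for p, flag in enumerate(is_prime) if flag]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```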

Boolean Algebra.

In mathematics and mathematical logic, Boolean algebra is the branch of algebra in which the values of the variables are the truth values true and false, usually denoted 1 and 0, respectively. Instead of elementary algebra, where the values of the variables are numbers and the prime operations are addition and multiplication, the main operations of Boolean algebra are the conjunction (and) denoted as ∧, the disjunction (or) denoted as ∨, and the negation (not) denoted as ¬. It is thus a formalism for describing logical operations, in the same way that elementary algebra describes numerical operations. Boolean algebra was introduced by George Boole in his first book The Mathematical Analysis of Logic (1847), and set forth more fully in his An Investigation of the Laws of Thought (1854). According to Huntington, the term "Boolean algebra" was first suggested by Sheffer in 1913, although Charles Sanders Peirce gave the title "A Boolean Algebra with One Constant" to the first chapter of his "The Simplest Mathematics" in 1880.
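
Because the variables range over only two values, Boolean identities can be verified exhaustively; a short Python check of De Morgan's law ¬(x ∧ y) = ¬x ∨ ¬y:

```python
for x in (0, 1):
    for y in (0, 1):
        lhs = int(not (x and y))        # not-(x and y)
        rhs = int((not x) or (not y))   # (not x) or (not y)
        print(x, y, lhs, rhs, lhs == rhs)  # equal on every row
```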

Mathematical Logic.

Mathematical logic is a subfield of mathematics exploring the applications of formal logic to mathematics. It bears close connections to metamathematics, the foundations of mathematics, and theoretical computer science. The unifying themes in mathematical logic include the study of the expressive power of formal systems and the deductive power of formal proof systems. Mathematical logic is often divided into the fields of set theory, model theory, recursion theory, and proof theory. These areas share basic results on logic, particularly first-order logic, and definability. In computer science (particularly in the ACM Classification) mathematical logic encompasses additional topics not detailed here; see Logic in computer science for those. Since its inception, mathematical logic has both contributed to, and has been motivated by, the study of foundations of mathematics. This study began in the late 19th century with the development of axiomatic frameworks for geometry, arithmetic, and analysis.

What is graph theory.

In mathematics, graph theory is the study of graphs, which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) which are connected by edges (also called links or lines). A distinction is made between undirected graphs, where edges link two vertices symmetrically, and directed graphs, where edges link two vertices asymmetrically; see Graph (discrete mathematics) for more detailed definitions and for other variations in the types of graph that are commonly considered. Graphs are one of the prime objects of study in discrete mathematics.
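
In code, a graph is commonly stored as an adjacency list; a minimal undirected example in Python (the vertices and edges are made up):

```python
# Undirected graph: each edge is recorded from both of its endpoints.
graph = {
    "A": ["B", "C"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["C"],
}
print(len(graph["C"]))                            # degree of vertex C: 3
print(sum(len(v) for v in graph.values()) // 2)   # number of edges: 4
```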

Discrete mathematics to learn computer science.

Discrete mathematics is the study of mathematical structures that are fundamentally discrete rather than continuous. In contrast to real numbers that have the property of varying "smoothly", the objects studied in discrete mathematics – such as integers, graphs, and statements in logic – do not vary smoothly in this way, but have distinct, separated values. Discrete mathematics therefore excludes topics in "continuous mathematics" such as calculus or Euclidean geometry. Discrete objects can often be enumerated by integers. More formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets (finite sets or sets with the same cardinality as the natural numbers). However, there is no exact definition of the term "discrete mathematics." Indeed, discrete mathematics is described less by what is included than by what is excluded: continuously varying quantities and related notions.

Define cybernetics.

Cybernetics is a transdisciplinary approach for exploring regulatory and purposive systems—their structures, constraints, and possibilities. The core concept of the discipline is circular causality or feedback—that is, where the outcomes of actions are taken as inputs for further action. Cybernetics is concerned with such processes however they are embodied, including in environmental, technological, biological, cognitive, and social systems, and in the context of practical activities such as designing, learning, managing, and conversation.

Description of Game theory.

Game theory is the study of mathematical models of strategic interaction among rational decision-makers. It has applications in all fields of social science, as well as in logic, systems science and computer science. Originally, it addressed zero-sum games, in which each participant's gains or losses are exactly balanced by those of the other participants. In the 21st century, game theory applies to a wide range of behavioral relations, and is now an umbrella term for the science of logical decision making in humans, animals, and computers.

What is coding theory? A brief description.

Coding theory is the study of the properties of codes and their respective fitness for specific applications. Codes are used for data compression, cryptography, error detection and correction, data transmission and data storage. Codes are studied by various scientific disciplines—such as information theory, electrical engineering, mathematics, linguistics, and computer science—for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction or detection of errors in the transmitted data.
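
The simplest example of error detection is a parity bit: one redundant bit that exposes any single-bit error. A Python sketch:

```python
def add_parity(bits):
    """Even-parity encoding: append a bit so the number of 1s is even."""
    return bits + [sum(bits) % 2]

def check(codeword):
    return sum(codeword) % 2 == 0   # detects any single flipped bit

word = add_parity([1, 0, 1, 1])
print(check(word))                  # True: codeword arrives intact
word[2] ^= 1                        # flip one bit "in transmission"
print(check(word))                  # False: the error is detected
```

A parity bit detects errors but cannot locate them; richer codes (Hamming, Reed–Solomon) add more structured redundancy so errors can also be corrected.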

Describe AnyLogic.

AnyLogic is a multimethod simulation modeling tool. It supports agent-based, discrete event, and system dynamics simulation methodologies. AnyLogic is cross-platform simulation software: it works on Windows, macOS and Linux. AnyLogic is used to simulate markets and competition, healthcare, manufacturing, supply chains and logistics, retail, business processes, social and ecosystem dynamics, defense, project and asset management, pedestrian dynamics and road traffic, IT, and aerospace. History of AnyLogic: In the beginning of the 1990s there was great interest in the mathematical approach to modeling and simulation of parallel processes. This approach may be applied to the analysis of the correctness of parallel and distributed programs. The Distributed Computer Network (DCN) research group at Saint Petersburg Polytechnic University developed such a software system for the analysis of program correctness; the new tool was named COVERS (Concurrent Verification and Simulation).

What is Adaptive Modeler?

Altreva Adaptive Modeler is a software application for creating agent-based financial market simulation models for the purpose of forecasting prices of real-world market-traded stocks or other securities. The technology it uses is based on the theory of agent-based computational economics (ACE), the computational study of economic processes modeled as dynamic systems of interacting heterogeneous agents. Altreva's Adaptive Modeler and other agent-based models are used to simulate financial markets to capture the complex dynamics of a large diversity of investors and traders with different strategies, different trading time frames, and different investment goals. Agent-based models based on heterogeneous and boundedly rational (learning) agents have been shown to be able to explain the empirical features of financial markets better than traditional financial models that are based on representative rational agents. Technology: The software creates an agent-based model for a particular stock.

Tensor Processing Unit.

A Tensor Processing Unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google specifically for neural network machine learning, particularly using Google's own TensorFlow software. Google began using TPUs internally in 2015, and in 2018 made them available for third-party use, both as part of its cloud infrastructure and by offering a smaller version of the chip for sale. The tensor processing unit was announced in May 2016 at Google I/O, when the company said that the TPU had already been used inside their data centers for over a year. The chip has been specifically designed for Google's TensorFlow framework, a symbolic math library which is used for machine learning applications such as neural networks. However, as of 2017 Google still used CPUs and GPUs for other types of machine learning. Other AI accelerator designs are appearing from other vendors as well, aimed at embedded and robotics markets. Google's TPUs are proprietary.

Chaff algorithm introduction.

Chaff is an algorithm for solving instances of the Boolean satisfiability problem (SAT). It was designed by researchers at Princeton University, United States. The algorithm is an instance of the DPLL algorithm with a number of enhancements for efficient implementation. Implementations: Some available implementations of the algorithm in software are mChaff and zChaff, the latter being the most widely known and used. zChaff was originally written by Dr. Lintao Zhang, now at Microsoft Research, hence the "z". It is now maintained by researchers at Princeton University and available for download as both source code and binaries on Linux. zChaff is free for non-commercial use.
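
For orientation, here is a compact Python sketch of the underlying DPLL procedure: plain unit propagation plus branching, with none of Chaff's actual enhancements (watched literals, clause learning, decision heuristics). The clause encoding is an illustrative choice.

```python
def dpll(clauses):
    """Basic DPLL satisfiability test. A clause is a frozenset of ints:
    k means variable k is true, -k means variable k is false."""
    if not clauses:
        return True                       # all clauses satisfied
    if frozenset() in clauses:
        return False                      # empty clause: contradiction
    # Unit propagation: a one-literal clause forces that literal.
    unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
    literal = unit if unit is not None else next(iter(clauses[0]))

    def assign(lit):
        # Drop clauses satisfied by lit; delete the opposite literal elsewhere.
        return [c - {-lit} for c in clauses if lit not in c]

    if dpll(assign(literal)):
        return True
    # Try the opposite value only when the literal was a free choice.
    return unit is None and dpll(assign(-literal))

# (x1 or x2) and (not x1 or x2) and (not x2 or x1): satisfiable, x1 = x2 = true
print(dpll([frozenset({1, 2}), frozenset({-1, 2}), frozenset({-2, 1})]))  # True
print(dpll([frozenset({1}), frozenset({-1})]))                            # False
```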

A statistical relationship can change when the economic environment changes because of effects through expectations.

Expectations play an important role in the economic theories that underpin most macroeconomic models. Planning for the future is a central part of economic life. The need to make decisions about the type of car to buy, the amount of education to pursue, and the fraction of income to save forces households to think about which choices make the most sense not just for today but for years into the future. Similarly, business firms, in deciding where to locate factories and offices, what equipment to install, and what products to develop and produce, make decisions with consequences that may last many years. Individuals must make informed guesses about circumstances in the years ahead and then base decisions on these expectations. The approach to expectations taken in the FRB/US model is best understood in the context of a debate that has engaged macroeconomists for the past twenty-five years.

What is Engineering?

Engineering is the use of scientific principles to design and build machines, structures, and other items, including bridges, tunnels, roads, vehicles, and buildings. The discipline of engineering encompasses a broad range of more specialized fields of engineering, each with a more specific emphasis on particular areas of applied mathematics, applied science, and types of application. See glossary of engineering.

History of economics

Economic writings date from earlier Mesopotamian, Greek, Roman, Indian subcontinent, Chinese, Persian, and Arab civilizations. Economic precepts occur throughout the writings of the Boeotian poet Hesiod, and several economic historians have described Hesiod himself as the "first economist". Other notable writers from Antiquity through to the Renaissance include Aristotle, Xenophon, Chanakya (also known as Kautilya), Qin Shi Huang, Thomas Aquinas, and Ibn Khaldun. Joseph Schumpeter described Aquinas as "coming nearer than any other group to being the 'founders' of scientific economics" as to monetary, interest, and value theory within a natural-law perspective. Two groups, who later were called "mercantilists" and "physiocrats", more directly influenced the subsequent development of the subject. Both groups were associated with the rise of economic nationalism and modern capitalism in Europe.

Multiple aspects of economic science

The discipline was renamed in the late 19th century, primarily due to Alfred Marshall, from "political economy" to "economics" as a shorter term for "economic science". At that time, it became more open to rigorous thinking and made increased use of mathematics, which helped support efforts to have it accepted as a science and as a separate discipline outside of political science and other social sciences. There are a variety of modern definitions of economics; some reflect evolving views of the subject or different views among economists. Scottish philosopher Adam Smith (1776) defined what was then called political economy as "an inquiry into the nature and causes of the wealth of nations".

Nvidia DGX

Nvidia DGX is a line of Nvidia-produced servers and workstations which specialize in using GPGPU to accelerate deep learning applications. DGX-1: DGX-1 servers feature 8 GPUs based on the Pascal or Volta daughter cards with HBM 2 memory, connected by an NVLink mesh network. The product line is intended to bridge the gap between GPUs and AI accelerators in that the device has specific features specializing it for deep learning workloads. The initial Pascal-based DGX-1 delivered 170 teraflops of half-precision processing, while the Volta-based upgrade increased this to 960 teraflops. DGX-2: The successor of the Nvidia DGX-1 is the Nvidia DGX-2, which uses 16 32 GB V100 (second generation) cards in a single unit. This increases performance of up to 2 petaflops with 512 GB of shared memory for tackling larger problems and uses NVSwitch to speed up internal communication. Additionally, there is a higher-performance version of the DGX-2, the DGX-2H.

Definition of neuromorphic engineering

Neuromorphic engineering, also known as neuromorphic computing, is a concept developed by Carver Mead in the late 1980s, describing the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system. In recent times, the term neuromorphic has been used to describe analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems (for perception, motor control, or multisensory integration). The implementation of neuromorphic computing on the hardware level can be realized by oxide-based memristors, spintronic memories, threshold switches, and transistors. A key aspect of neuromorphic engineering is understanding how the morphology of individual neurons, circuits, applications, and overall architectures creates desirable computations, affects how information is represented, influences robustness to damage, incorporates learning and development, adapts to local change (plasticity), and facilitates evolutionary change.

History of AI acceleration.

Computer systems have frequently complemented the CPU with special purpose accelerators for specialized tasks, known as coprocessors. Notable application-specific hardware units include video cards for graphics, sound cards, graphics processing units and digital signal processors. As deep learning and artificial intelligence workloads rose in prominence in the 2010s, specialized hardware units were developed or adapted from existing products to accelerate these tasks.

What is AI acceleration.

An AI accelerator is a class of specialized hardware accelerator or computer system designed to accelerate artificial intelligence applications, especially artificial neural networks, machine vision and machine learning. Typical applications include algorithms for robotics, the internet of things and other data-intensive or sensor-driven tasks. They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability. As of 2018, a typical AI integrated circuit chip contains billions of MOSFET transistors. A number of vendor-specific terms exist for devices in this category, and it is an emerging technology without a dominant design.

Define an emotion markup language

An Emotion Markup Language (EML) was first defined by the W3C Emotion Incubator Group as a general-purpose emotion annotation and representation language, which should be usable in a large variety of technological contexts where emotions need to be represented. Emotion-oriented computing (or "affective computing") is gaining importance as interactive technological systems become more sophisticated. Representing the emotional states of a user, or the emotional states to be simulated by a user interface, requires a suitable representation format; in this case a markup language is used.

What is artificial empathy

Artificial empathy (AE) or computational empathy is the development of AI systems − such as companion robots or virtual agents − that are able to detect and respond to human emotions in an empathic way. According to scientists, although the technology can be perceived as scary or threatening by many people, it could also have a significant advantage over humans in professions which are traditionally involved in emotional role-playing, such as the health care sector. From the care-giver perspective, for instance, performing emotional labor above and beyond the requirements of paid labor often results in chronic stress or burnout, and the development of a feeling of being desensitized to patients. However, it is argued that the emotional role-playing between the care-receiver and a robot can actually have a more positive outcome in terms of creating the conditions of less fear and concern for one's own predicament.

Affectiva definition

Affectiva is a software company that builds artificial intelligence that understands human emotions, cognitive states, activities and the objects people use, by analyzing facial and vocal expressions. The company spun out of the MIT Media Lab and created the new technology category of Artificial Emotional Intelligence (Emotion AI).

Affective Computing

Affective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer science, psychology, and cognitive science. While some core ideas in the field may be traced as far back as early philosophical inquiries into emotion, the more modern branch of computer science originated with Rosalind Picard's 1995 paper on affective computing and her book Affective Computing, published by MIT Press. One of the motivations for the research is the ability to give machines emotional intelligence, including to simulate empathy. The machine should interpret the emotional state of humans and adapt its behavior to them, giving an appropriate response to those emotions.

Analytical Engine

The Analytical Engine was a proposed mechanical general-purpose computer designed by English mathematician and computer pioneer Charles Babbage, with the assistance of Ada Lovelace. It was first described in 1837 as the successor to Babbage's difference engine, which was a design for a simpler mechanical calculator. The Analytical Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. In other words, the logical structure of the Analytical Engine was essentially the same as that which has dominated computer design in the electronic era. The Analytical Engine is one of the most celebrated achievements of Charles Babbage, yet he was never able to complete construction of any of his machines due to conflicts with his chief engineer and inadequate funding. It was not until 1941 that the first general-purpose computer, Z3, was built.

Macroeconomics

Macroeconomists study topics such as GDP, unemployment rates, national income, price indices, output, consumption, inflation, saving, investment, energy, international trade, and international finance. Macroeconomics and microeconomics are the two most general fields in economics. The United Nations Sustainable Development Goal 17 has a target to enhance global macroeconomic stability through policy coordination and coherence as part of the 2030 Agenda.

Alternatives in decision theory

A highly controversial issue is whether one can replace the use of probability in decision theory with other alternatives. Probability theory: Advocates for the use of probability theory point to the work of Richard Threlkeld Cox for justification of the probability axioms, the Dutch book paradoxes of Bruno de Finetti as illustrative of the theoretical difficulties that can arise from departures from the probability axioms, and the complete class theorems, which show that all admissible decision rules are equivalent to the Bayesian decision rule for some utility function and some prior distribution (or for the limit of a sequence of prior distributions). Thus, for every decision rule, either the rule may be reformulated as a Bayesian procedure (or a limit of a sequence of such), or there is a rule that is sometimes better and never worse. Alternatives to probability theory: The proponents of fuzzy logic, possibility theory, quantum cognition, Dempster–Shafer theory, and info-gap decision theory maintain that probability is only one of many alternatives.

Heuristics

Heuristic in decision-making is the ability to make decisions based on unjustified or routine thinking. While quicker than step-by-step processing, heuristic thinking is also more likely to involve fallacies or inaccuracies. The main use for heuristics in our daily routines is to decrease the amount of evaluative thinking we perform when making simple decisions, making them instead based on unconscious rules and focusing on some aspects of the decision while ignoring others. One example of a common and erroneous thought process that arises through heuristic thinking is the gambler's fallacy: believing that an isolated random event is affected by previous isolated random events. For example, if a coin has landed tails for a couple of flips, the next flip still has the same probability of landing tails; however, it intuitively seems more likely to land heads soon. This happens because, due to routine thinking, one disregards the probability and concentrates on the ratio of the outcomes.
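
The claim about the coin is easy to check empirically; a short Python simulation (the sample size and seed are arbitrary) shows that after a run of three tails the next flip is still heads about half the time:

```python
import random

random.seed(0)
flips = [random.random() < 0.5 for _ in range(200_000)]  # True means heads
next_after_streak = []
for i in range(3, len(flips)):
    if not any(flips[i - 3:i]):          # the previous three flips were tails
        next_after_streak.append(flips[i])
print(sum(next_after_streak) / len(next_after_streak))   # ~0.5, not higher
```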