Computer Science/Artificial Intelligence
8. Learning from nature
Natural computation
- Algorithms derived from observations of natural phenomena
- Simulations: to learn more about these phenomena, and to learn new ways to solve computational problems
Natural computation and machine learning
- What is learning? The ability to improve over time, based on experience
- Why? Solutions to problems are not always programmable
- Examples: handwritten character recognition, adaptive control of p..
2023.10.24
7. Philosophy, Ethics, and Safety of AI
Strong vs Weak AI
- The Strong AI hypothesis is the philosophical position that a computer program that causes a machine to behave exactly like a human being would also give the machine subjective conscious experience and a mind.
- On the other hand, Weak AI is the philosophical position that an AI that appears to behave exactly like a human being is only a simulation of human cognitive function, and i..
2023.10.24
6. Bayesian Networks
Definition
- Bayesian networks are networks of random variables → each variable is associated with a node in the network
- If we know of the existence of conditional independencies between variables, we can simplify the network by removing edges; this leads to the simplified network
- 💡 Here, the absence of a connecting edge implies conditional independence.
Causal Networks
- While correlation (association) between variables is an imp..
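The notes don't include code, but the factorization idea can be sketched in a few lines. A minimal sketch, assuming a made-up three-node network A → B, A → C (all node names and probability values are illustrative): because B and C have no connecting edge, the joint factorizes as P(A)·P(B|A)·P(C|A).

```python
# Toy Bayesian network A -> B, A -> C: with no edge between B and C,
# they are conditionally independent given A, so the joint factorizes
# as P(A) * P(B|A) * P(C|A). All tables below are invented numbers.

p_a = {True: 0.3, False: 0.7}          # P(A)
p_b_given_a = {True: 0.9, False: 0.2}  # P(B=True | A)
p_c_given_a = {True: 0.6, False: 0.1}  # P(C=True | A)

def joint(a, b, c):
    """P(A=a, B=b, C=c) via the factorized form."""
    pb = p_b_given_a[a] if b else 1 - p_b_given_a[a]
    pc = p_c_given_a[a] if c else 1 - p_c_given_a[a]
    return p_a[a] * pb * pc

# Sanity check: the joint over all 8 assignments sums to 1.
total = sum(joint(a, b, c)
            for a in (True, False)
            for b in (True, False)
            for c in (True, False))
print(round(total, 10))  # 1.0
```

The payoff of the factorization is storage: this network needs 5 numbers instead of the 7 independent entries a full joint over three binary variables would require.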
2023.10.24
5. Markov Chains
Markov Chains
- Markov chains appear all over computer science, mathematics, and AI. Just some of the applications:
  - Biology: birth-death processes, disease spreading
  - Biology: DNA/RNA/protein sequence analysis
  - Speech recognition
  - Control theory, filtering
- 💡 LLMs (Large Language Models) borrowed heavily from the ideas behind Markov chains, although these have since been replaced by deep learning algorithms.
Representing Markov Chains
- Markov chains and Markov process..
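A minimal sketch of how a Markov chain can be represented and iterated, assuming a toy two-state weather chain (the states and transition probabilities are invented for illustration):

```python
# Two-state Markov chain; transition probabilities are illustrative.
P = {
    "Sunny": {"Sunny": 0.8, "Rainy": 0.2},
    "Rainy": {"Sunny": 0.5, "Rainy": 0.5},
}

def step(dist):
    """One step of the chain: push a distribution over states through P."""
    out = {s: 0.0 for s in P}
    for s, ps in dist.items():
        for t, pt in P[s].items():
            out[t] += ps * pt
    return out

# Repeated application converges to the stationary distribution pi = pi P.
dist = {"Sunny": 1.0, "Rainy": 0.0}
for _ in range(100):
    dist = step(dist)
print({s: round(p, 3) for s, p in dist.items()})
# → {'Sunny': 0.714, 'Rainy': 0.286}
```

The limit here is 5/7 and 2/7, which you can verify by solving π = πP by hand; the key Markov property is that each step depends only on the current distribution, not on the path taken to reach it.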
2023.10.24
4. Statistical Learning Basics
Distribution
- When we fit a distribution to data, we estimate good values for these parameters from observed data
- 💡 That is, we estimate the parameters based on observed data.
Fitting a Distribution
- All statistical distributions and models have parameters. The values given to these parameters determine the exact mathematical function involved.
Maximum Likelihood (MLE)
- The likelihood of the model given observed data is..
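A minimal sketch of maximum likelihood estimation, assuming a made-up sample of Bernoulli (coin-flip) data: for this model the MLE has a closed form, the sample mean, and the log-likelihood function confirms that nearby parameter values score lower.

```python
import math

# Invented coin-flip sample (1 = heads).
data = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]

def log_likelihood(p, xs):
    """log L(p) = sum over the data of log P(x | p), Bernoulli model."""
    return sum(math.log(p if x == 1 else 1 - p) for x in xs)

# For a Bernoulli distribution the MLE is simply the sample mean.
p_hat = sum(data) / len(data)
print(p_hat)  # 0.7

# Sanity check: the closed form beats other candidate parameter values.
assert log_likelihood(p_hat, data) > log_likelihood(0.5, data)
```

For models without a closed-form maximizer, the same `log_likelihood` would instead be handed to a numerical optimizer, but the principle is identical: pick the parameters that make the observed data most probable.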
2023.10.24
3. Planning and Scheduling
Planning
Definition
- Planning is finding a sequence of actions to accomplish a task → PDDL: Planning Domain Definition Language
States
- States are specified by a set of atomic fluents (statements that can be true or false)
- They don't include conjunctions, disjunctions, conditionals, or negations
- Unique name assumption
- Closed world assumption: anything not known to be true is assumed false
- Domain closure: there are no unnamed ob..
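A minimal sketch of the state representation described above, in STRIPS style (fluent and action names are invented for illustration): a state is the set of atomic fluents that hold, everything absent is false under the closed-world assumption, and an action is preconditions plus add/delete lists.

```python
# A state is the set of true atomic fluents; under the closed-world
# assumption, any fluent not in the set is false. Names are illustrative.
state = {"at(robot, roomA)", "door_open(roomA, roomB)"}

# A STRIPS-style action: preconditions, add list, delete list.
move = {
    "pre": {"at(robot, roomA)", "door_open(roomA, roomB)"},
    "add": {"at(robot, roomB)"},
    "del": {"at(robot, roomA)"},
}

def apply_action(state, action):
    """Apply an action if all its preconditions hold in the state."""
    if not action["pre"] <= state:  # preconditions must all be true
        raise ValueError("preconditions not satisfied")
    return (state - action["del"]) | action["add"]

new_state = apply_action(state, move)
print("at(robot, roomB)" in new_state)  # True
print("at(robot, roomA)" in new_state)  # False
```

A planner then searches over sequences of such applications until a state satisfying the goal fluents is reached.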
2023.10.24
2. Heuristics and Competitive Search
Shortest Path Problems
- Consider the case where the edges in our search space have costs associated with them, and the goal is to find a path from the origin to the goal that minimizes the cumulative cost
Cumulative Best-First Search
- We have a frontier set ordered by the cost of its members, initially empty
- Place the origin in the frontier set, appending an empty path vector and a cost value of 0
- Repe..
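The first two steps above can be sketched as uniform-cost search with a priority queue; the graph and edge costs below are invented for illustration.

```python
import heapq

# Weighted directed graph; edge costs are illustrative.
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
    "D": [],
}

def cumulative_best_first(graph, origin, goal):
    """Frontier ordered by cumulative path cost; origin starts at cost 0."""
    frontier = [(0, origin, [origin])]  # (cost so far, node, path)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, w in graph[node]:
            if nbr not in visited:
                heapq.heappush(frontier, (cost + w, nbr, path + [nbr]))
    return None

print(cumulative_best_first(graph, "A", "D"))  # (4, ['A', 'B', 'C', 'D'])
```

Note that the direct edge B → D (cost 5) loses to the detour through C (cost 2 + 1): always expanding the cheapest frontier member is what guarantees the minimum cumulative cost.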
2023.10.23
1. Search Basics
Search Problems
- Problem
- Search Scenario 1
  - Graph Search
  - Search Algorithms: BFS & DFS
  - Iterative Deepening (ID)
  - Bidirectional Search
  - Path information
  - Example
- Search Scenario 2
  - Best-First Search
  - Local Search
    - Procedure
    - Greedy Hill Climb
    - Greedy Hill Climb with Random Restarts
    - Simulated Annealing
    - Local Beam Search
    - Stochastic Local Beam Search
- Search Scenario 3
  - Dynamic Programming for Path Finding
Search Problems
- A search ..
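A minimal sketch of the first algorithm in the list, BFS with path information carried along in the frontier (the graph is invented for illustration):

```python
from collections import deque

# Unweighted directed graph; node names are illustrative.
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": ["E"],
    "E": [],
}

def bfs(graph, origin, goal):
    """Breadth-first search, storing the path to each frontier node."""
    frontier = deque([[origin]])  # each entry is a full path
    visited = {origin}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(path + [nbr])
    return None

print(bfs(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```

Swapping the `deque` (FIFO) for a stack (LIFO) turns this into DFS; ordering the frontier by an evaluation function instead gives the best-first variants listed above.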
2023.10.23