Learn Anything about AI in one site

Best learning resources for AI


Roo Code

2025
Roo Code, Inc.

Roo Code puts an entire AI dev team right in your editor, outpacing closed tools with deep project-wide context, multi-step agentic coding, and unmatched developer-centric flexibility.

ai-tools, ai-coding, vibe-coding, plugin

LightGBM

2016
Microsoft

LightGBM is an open-source gradient-boosting framework that delivers fast, memory-efficient tree-based learning for classification, regression and ranking tasks.

ai-development, microsoft, ai-framework, gradient-boosting, ai-train
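
For context, a minimal sketch of training a classifier with LightGBM's scikit-learn-style API; the synthetic data and hyperparameter values are illustrative assumptions, not recommendations.

```python
# Minimal LightGBM classification sketch (illustrative values only).
import numpy as np
import lightgbm as lgb

# Synthetic data stands in for a real dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Gradient-boosted decision trees; these hyperparameters are placeholders.
model = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05, num_leaves=31)
model.fit(X, y)

print(model.predict(X[:5]))
```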

Kiro

2025
Amazon

Kiro helps you do your best work by bringing structure to AI coding with spec-driven development.

ai-tools, ai-coding, vibe-coding, amazon

CatBoost

2017
Yandex

Open-source gradient-boosting library from Yandex that natively handles categorical features and offers fast CPU/GPU training.

ai-development, ai-library, gradient-boosting, ai-train
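
A minimal sketch of the native categorical-feature handling mentioned above, using CatBoost's Python API; the toy DataFrame and parameter values are assumptions for illustration.

```python
# Minimal CatBoost sketch: categorical columns are passed as-is, no manual encoding.
import pandas as pd
from catboost import CatBoostClassifier

df = pd.DataFrame({
    "color": ["red", "blue", "green", "blue", "red", "green"],
    "size": [1.0, 2.5, 3.2, 0.7, 1.9, 2.2],
    "label": [0, 1, 1, 0, 0, 1],
})

model = CatBoostClassifier(iterations=100, learning_rate=0.1, verbose=False)
# cat_features tells CatBoost which columns to encode natively.
model.fit(df[["color", "size"]], df["label"], cat_features=["color"])

print(model.predict(df[["color", "size"]]))
```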

Bolt

2024
StackBlitz

Prompt, run, edit & deploy apps

ai-tools, ai-coding, vibe-coding, ai-agent

Replit Agent

2024
Replit, Inc.

Automate the repetitive parts of coding, so you can stay focused on taking your idea to software.

ai-tools, ai-coding, vibe-coding, ai-agent

Computing Machinery and Intelligence

1950
Alan Turing

A seminal paper by Alan Turing on artificial intelligence. Published in Mind in 1950, it was the first to introduce to the general public the concept now known as the Turing test.

paper, foundation

The perceptron: a probabilistic model for information storage and organization in the brain

1958
Frank Rosenblatt

Frank Rosenblatt’s 1958 paper introduced the perceptron, a probabilistic model of neural connections for learning and pattern recognition. Despite its early limitations and later critiques, it laid the mathematical and conceptual groundwork for modern neural networks and sparked decades of research in artificial intelligence.

paper, foundation
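
To make the learning rule concrete, a small NumPy sketch of the classic perceptron update (weights nudged toward misclassified examples); the data and learning rate are assumed for illustration, and this simplifies Rosenblatt's probabilistic formulation.

```python
# Perceptron learning rule sketch: w <- w + lr * (target - prediction) * x
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X @ np.array([1.5, -2.0]) > 0).astype(int)  # linearly separable labels

w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(20):                      # a few passes over the data
    for xi, yi in zip(X, y):
        pred = int(xi @ w + b > 0)       # threshold unit
        w += lr * (yi - pred) * xi       # update only on mistakes
        b += lr * (yi - pred)

print("training accuracy:", ((X @ w + b > 0).astype(int) == y).mean())
```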

Learning Internal Representations by Error Propagation

1985
David E. Rumelhart, Geoffrey E. Hinton, Ronald J. Williams

This paper introduces the generalized delta rule, a learning procedure for multi-layer networks with hidden units, enabling them to learn internal representations. This rule implements a gradient descent method to minimize the error between the network's output and a target output by propagating error signals backward through the network. The authors demonstrate through simulations on various problems, such as XOR and parity, that this method, often called backpropagation, can discover complex internal representations and solutions. They show it overcomes previous limitations in training such networks and rarely encounters debilitating local minima.

paper, foundation
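
A compact NumPy sketch of the generalized delta rule on the XOR problem the paper discusses; the network size, learning rate, and epoch count are illustrative assumptions.

```python
# Two-layer network trained by backpropagation (generalized delta rule) on XOR.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.5, size=(2, 4)), np.zeros(4)   # hidden layer
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # Backward pass: propagate error signals (deltas) toward the input.
    delta_out = (y - t) * y * (1 - y)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates.
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid
    b1 -= lr * delta_hid.sum(axis=0)

print(y.round(3).ravel())   # outputs should approach [0, 1, 1, 0]
```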

Keeping NN Simple by Minimizing the Description Length of the Weights

1993
Geoffrey E. Hinton, Drew van Camp

This paper proposes minimizing the information content in neural network weights to enhance generalization, particularly when training data is scarce. It introduces a method where adaptable Gaussian noise is added to the weights, balancing the expected squared error against the amount of information the weights contain. Leveraging the Minimum Description Length (MDL) principle and a "bits back" argument for communicating these noisy weights, the approach enables efficient derivative computations, especially if output units are linear. The paper also explores using adaptive mixtures of Gaussians for more flexible prior distributions for weight coding. Preliminary results indicated a slight improvement over simple weight-decay on a high-dimensional task.

foundation, 30u30, paper
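
As a rough illustration of the trade-off the paper describes, a sketch of the penalty term when each weight is communicated as a Gaussian with mean mu and standard deviation sigma under a zero-mean Gaussian prior; treating the description length as the KL divergence between these Gaussians is an assumption of this sketch, and the numbers are made up.

```python
# Sketch: information cost (in nats) of noisy Gaussian weights under a Gaussian prior.
# Overall objective ~ expected squared error + sum of per-weight description lengths.
import numpy as np

def weight_description_length(mu, sigma, prior_sigma=1.0):
    """KL( N(mu, sigma^2) || N(0, prior_sigma^2) ) per weight, in nats."""
    return (np.log(prior_sigma / sigma)
            + (sigma**2 + mu**2) / (2.0 * prior_sigma**2)
            - 0.5)

mu = np.array([0.8, -0.1, 1.5])       # posterior means (illustrative)
sigma = np.array([0.3, 0.9, 0.2])     # posterior std devs (illustrative)

penalty = weight_description_length(mu, sigma).sum()
print(f"weight information cost: {penalty:.3f} nats")
```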

A Tutorial Introduction to the Minimum Description Length Principle

2004
Peter Grunwald

This paper gives a concise tutorial on MDL, unifying its intuitive and formal foundations and inspiring widespread use of MDL in statistics and machine learning.

foundation, 30u30, paper, math
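
For a flavor of the idea, a toy two-part-code model-selection sketch: pick the polynomial degree minimizing (bits for the model) plus (bits for the residuals). The coding scheme here, a crude (k/2)·log n parameter cost with a Gaussian residual code, is an assumption for illustration rather than the refined codes the tutorial develops.

```python
# Toy two-part MDL: choose polynomial degree k minimizing model bits + data bits.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = 1.0 - 2.0 * x + 0.5 * x**2 + rng.normal(scale=0.1, size=x.size)  # true degree 2

n = x.size
for k in range(6):
    coeffs = np.polyfit(x, y, k)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    # Crude two-part code: (k+1)/2 * log2(n) bits for parameters,
    # n/2 * log2(rss/n) bits for Gaussian-coded residuals (up to a constant).
    dl = 0.5 * (k + 1) * np.log2(n) + 0.5 * n * np.log2(rss / n)
    print(f"degree {k}: description length ~ {dl:.1f} bits")
```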

Pattern Recognition and Machine Learning

2006
Christopher M. Bishop

The book covers probabilistic approaches to machine learning, including Bayesian networks, graphical models, kernel methods, and EM algorithms. It emphasizes a statistical perspective over purely algorithmic approaches, helping formalize machine learning as a probabilistic inference problem. Its clear mathematical treatment and broad coverage have made it a standard reference for researchers and graduate students. The book’s impact lies in shaping the modern probabilistic framework widely used in fields like computer vision, speech recognition, and bioinformatics, deeply influencing the development of Bayesian machine learning methods.

foundation, book