This post asks why the "complexity" of physical systems rises, peaks, and then falls over time, unlike entropy, which increases monotonically. Drawing on Kolmogorov complexity and the notion of "sophistication," the author proposes a formal measure he calls "complextropy": a quantity that is low in both highly ordered and fully random states but peaks during intermediate, evolving phases. He suggests imposing computational resource bounds to make the measure non-trivial and sketches both theoretical and empirical approaches (e.g., using file compression as a proxy) to test the idea, acknowledging that it remains an open problem.
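The sketch below is my own illustration of the empirical "file compression" idea, not code from the post: a compressor stands in for Kolmogorov complexity, the compressed size of the raw state serves as an entropy proxy, and the compressed size of a coarse-grained view serves as a crude complextropy proxy. The 1-D cream-and-coffee mixing model, the coarse-graining scheme, and all sizes are assumptions chosen for brevity.

```python
import random
import zlib

def compressed_size(values):
    """Bytes needed to zlib-compress a sequence of small integers."""
    return len(zlib.compress(bytes(values), level=9))

def coarse_grain(state, block=8):
    """Average each block and quantize to three levels: cream, mixed, coffee."""
    grained = []
    for i in range(0, len(state), block):
        avg = sum(state[i:i + block]) / block
        grained.append(0 if avg < 0.33 else 2 if avg > 0.67 else 1)
    return grained

def mix(state, swaps):
    """Randomly swap adjacent cells, a toy model of diffusive mixing."""
    for _ in range(swaps):
        i = random.randrange(len(state) - 1)
        state[i], state[i + 1] = state[i + 1], state[i]

if __name__ == "__main__":
    random.seed(0)
    state = [0] * 128 + [1] * 128   # "cream" sitting on top of "coffee"
    for step in range(8):
        print(f"step {step}: entropy~{compressed_size(state):3d} B, "
              f"complextropy~{compressed_size(coarse_grain(state)):3d} B")
        mix(state, swaps=200_000)
```

Tracking both numbers over the mixing run is one way to probe whether the coarse-grained proxy really rises and then falls while the raw-state proxy only rises.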
Stanford’s 10-week CS231n moves from first principles to state-of-the-art vision research, starting with image-classification basics, loss functions, and optimization, then building from fully-connected nets to modern CNNs, residual networks, and vision-transformer architectures. Lectures cover training tricks, regularization, visualization, transfer learning, detection, segmentation, video, 3-D vision, and generative models. Three hands-on PyTorch assignments take students from k-NN/SVM classifiers through deep CNNs and network visualization, and a capstone project has teams train large-scale models on a vision task of their choice, leaving them with the skills to design, debug, and deploy real-world deep-learning pipelines.
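As a rough flavor of the kind of pipeline the assignments build up to (this is my own minimal sketch, not course code), here is a tiny CNN classifier, a softmax cross-entropy loss, and one SGD update on a fake batch of CIFAR-10-sized images; the architecture and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = TinyCNN()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)

images = torch.randn(8, 3, 32, 32)          # fake batch of 32x32 RGB images
labels = torch.randint(0, 10, (8,))         # fake class labels

logits = model(images)
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"one training step done, loss = {loss.item():.3f}")
```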
This tutorial explores the surprising capabilities of Recurrent Neural Networks (RNNs), particularly in generating coherent text character by character. It delves into how RNNs, especially when implemented with Long Short-Term Memory (LSTM) units, can learn complex patterns and structures in data, enabling them to produce outputs that mimic the style and syntax of the training material. The discussion includes the architecture of RNNs, their ability to handle sequences of varying lengths, and the challenges associated with training them, such as the vanishing gradient problem. Through various examples, the tutorial illustrates the potential of RNNs in tasks like language modeling and sequence prediction.
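To make the character-by-character idea concrete, here is a toy sketch of character-level language modeling with an LSTM in PyTorch (my own illustration, not the tutorial's min-char-rnn): train next-character prediction on a tiny string, then sample by feeding each predicted character back in. The corpus, model sizes, and training length are placeholder assumptions.

```python
import torch
import torch.nn as nn

text = "hello world, hello recurrent networks. "
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([stoi[c] for c in text])

class CharLSTM(nn.Module):
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state

model = CharLSTM(len(chars))
optimizer = torch.optim.Adam(model.parameters(), lr=3e-3)
criterion = nn.CrossEntropyLoss()

# Train next-character prediction: input is text[:-1], target is text[1:].
x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)
for step in range(300):
    logits, _ = model(x)
    loss = criterion(logits.reshape(-1, len(chars)), y.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Sample 80 characters, feeding each prediction back in as the next input.
idx = torch.tensor([[stoi["h"]]])
state, out = None, "h"
with torch.no_grad():
    for _ in range(80):
        logits, state = model(idx, state)
        probs = torch.softmax(logits[0, -1], dim=-1)
        idx = torch.multinomial(probs, 1).unsqueeze(0)
        out += itos[idx.item()]
print(out)
```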
This tutorial explains how Long Short-Term Memory (LSTM) networks address the limitations of traditional Recurrent Neural Networks (RNNs), particularly their difficulty in learning long-term dependencies due to vanishing gradients. LSTMs introduce a cell state that acts as a conveyor belt, letting information flow through largely unchanged, and use gates (input, forget, and output) to regulate what information is added, removed, and exposed. This architecture enables LSTMs to capture and maintain long-term dependencies in sequential data effectively.
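The standard LSTM update can be written out explicitly to make the gates and the cell-state "conveyor belt" concrete. The sketch below is a single LSTM time step in plain PyTorch tensor ops (not code from the tutorial); the packed weight layout and names are illustrative assumptions.

```python
import torch

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step.

    x: (input_size,) current input; h_prev, c_prev: (hidden,) previous hidden
    and cell states; W: (4*hidden, input_size), U: (4*hidden, hidden),
    b: (4*hidden,) packed parameters for the i, f, o gates and candidate g.
    """
    hidden = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = torch.sigmoid(z[0 * hidden:1 * hidden])   # input gate: what to write
    f = torch.sigmoid(z[1 * hidden:2 * hidden])   # forget gate: what to erase
    o = torch.sigmoid(z[2 * hidden:3 * hidden])   # output gate: what to expose
    g = torch.tanh(z[3 * hidden:4 * hidden])      # candidate cell contents
    c = f * c_prev + i * g                        # cell state: the "conveyor belt"
    h = o * torch.tanh(c)                         # new hidden state
    return h, c

# Tiny usage example with random parameters.
input_size, hidden_size = 8, 16
x = torch.randn(input_size)
h = c = torch.zeros(hidden_size)
W = torch.randn(4 * hidden_size, input_size) * 0.1
U = torch.randn(4 * hidden_size, hidden_size) * 0.1
b = torch.zeros(4 * hidden_size)
h, c = lstm_step(x, h, c, W, U, b)
print(h.shape, c.shape)   # torch.Size([16]) torch.Size([16])
```

Because the cell state is updated additively (`c = f * c_prev + i * g`) rather than repeatedly squashed through a nonlinearity, gradients can flow across many time steps, which is the key to learning long-range dependencies.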