<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
<channel>
<title>CBMM Publications - Other</title>
<link>https://hdl.handle.net/1721.1/111653</link>
<description/>
<pubDate>Sun, 05 Apr 2026 18:40:00 GMT</pubDate>
<dc:date>2026-04-05T18:40:00Z</dc:date>
<item>
<title>A Definition of General Problem Solving</title>
<link>https://hdl.handle.net/1721.1/126147</link>
<description>A Definition of General Problem Solving
Liao, Qianli
What is general intelligence? What is meant by general problem solving? We attempt to give a definition of general problem solving, characterize the common process of problem solving, and provide a basic algorithm that can in principle solve a wide range of novel tasks. Specifically, we represent general problem solving as an information/data conversion task that can be solved by finding dependencies/explanations. We propose “Object-Oriented Programming”, a general reasoning framework with object-centric operations that solves problems in a human-like, goal-driven fashion, guided by information, compositionality, and general theories of objects, instead of merely via large-scale searches.
</description>
<pubDate>Mon, 13 Jul 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/126147</guid>
<dc:date>2020-07-13T00:00:00Z</dc:date>
</item>
<item>
<title>Flexible Intelligence</title>
<link>https://hdl.handle.net/1721.1/125866</link>
<description>Flexible Intelligence
Liao, Qianli
We discuss the problem of flexibility in intelligence, a relatively little-studied topic in machine learning and AI. Flexibility can be understood as out-of-distribution generalization, and it can be achieved by converting novel distributions into known distributions. Such conversions may play the role of knowledge and be accumulated in the intelligent system, leading to human-like learning and generalization.
</description>
<pubDate>Thu, 18 Jun 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/125866</guid>
<dc:date>2020-06-18T00:00:00Z</dc:date>
</item>
<item>
<title>Universal Format Conversions</title>
<link>https://hdl.handle.net/1721.1/125680</link>
<description>Universal Format Conversions
Liao, Qianli
Information is the fuel for intelligence. Any competitive intelligence system should be information hungry. “Formats”, on the other hand, are the containers for information. Accessing information without the ability to decipher its format is like drinking without a container.&#13;
From this perspective, current machine learning systems arguably have zero general information-processing capability, because they cannot really handle information that is presented differently from their standard input format, even if the changes are completely trivial and regular. Humans, however, have no trouble understanding reasonable changes at all. Thus, there is an unexplored research area: making machines understand formats in a flexible way, as humans do. As a first step in this direction, we propose a task called Universal Format Conversions (UFC): a task designed to test a system’s ability to understand formats and convert between any formats of data by observing just a few examples. This requires an intelligent system to extract useful information (“read”) and convey knowledge (“write”) with novel data structures and text with minimal training, leading to the ability to “communicate” flexibly in the form of structured data, artificial expressions, and even natural language. Furthermore, we note that an ideal intelligent system should go beyond working with pairs of formats — it should discover interesting information by looking at only one format, namely possessing a zero-shot pattern discovery ability. Finally, solving UFC would directly lead to real-world breakthroughs in programming, since an enormous amount of programmers’ time is spent on converting all types of ad hoc data structures and formats.
</description>
<pubDate>Fri, 05 Jun 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/125680</guid>
<dc:date>2020-06-05T00:00:00Z</dc:date>
</item>
<item>
<title>Universal Metaphysics</title>
<link>https://hdl.handle.net/1721.1/123331</link>
<description>Universal Metaphysics
Liao, Qianli
The development of natural science, especially physics, allows us to understand the material world to a large extent. However, the world also contains a large number of concepts that are non-material and abstract, which are often poorly described by our language, let alone well understood. In order to provide a comprehensive and coherent account of the structure of the world, we argue that it is important to create an explicit system and language to describe the composition and workings of the world, especially its non-material components. This is reminiscent of the goal of the millennia-old subject of metaphysics. Yet instead of focusing on isolated topics like most existing metaphysical studies, we argue it is beneficial to develop a roadmap for metaphysics (or mind) — a unified and coherent theory of what exists in the world, how to describe it, how its parts interact, and how they are organized. Such development might lead to new insights into research in the science and engineering of intelligence, and perhaps also into how we view the world.
</description>
<pubDate>Tue, 31 Dec 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/123331</guid>
<dc:date>2019-12-31T00:00:00Z</dc:date>
</item>
<item>
<title>Technical Report: Building a Neural Ensemble Decoder by Extracting Features Shared Across Multiple Populations</title>
<link>https://hdl.handle.net/1721.1/122041</link>
<description>Technical Report: Building a Neural Ensemble Decoder by Extracting Features Shared Across Multiple Populations
Chang, Chia-Jung
To understand whether and how a certain population of neurons represents behaviorally relevant variables, building a neural ensemble decoder has been used to extract information from the recorded activity. Among the different ways to decode neural ensemble activity, the parametric approach requires assumptions about the spiking distribution and an underlying encoding model, which poses challenges for neurons with nonlinear, multi-modal, and complex receptive fields. Alternatively, the non-parametric framework assumes no explicit probability distribution and discovers patterns from the data in an unbiased way, and thus training a machine learning model as a decoder has gained popularity in the field. However, machine learning models require a large enough dataset, yet the data size is often small due to limitations in recording techniques. Although increasing the number of subjects helps increase the size of the overall training set, how to concatenate recorded ensemble activity across subjects while preserving its spatial-temporal structure is not trivial. In this technical report, a novel way to extract features shared across populations from multiple subjects to train a machine learning model is described. With this feature extraction framework, one can easily test different hypotheses about the underlying coding strategies. In addition, several common issues in applying a machine learning model to decode neural activity are discussed. Overall, this report provides a rigorous protocol for applying machine learning models to decode a relatively small dataset: neural ensemble activity collected across multiple populations.
</description>
<pubDate>Thu, 05 Sep 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/122041</guid>
<dc:date>2019-09-05T00:00:00Z</dc:date>
</item>
<item>
<title>The Language of Fake News: Opening the Black-Box of Deep Learning Based Detectors</title>
<link>https://hdl.handle.net/1721.1/120056</link>
<description>The Language of Fake News: Opening the Black-Box of Deep Learning Based Detectors
O'Brien, Nicole; Latessa, Sophia; Evangelopoulos, Georgios; Boix, Xavier
The digital information age has generated new outlets for content creators to publish so-called “fake news”, a new form of propaganda that is intentionally designed to mislead the reader. With the widespread effects of the fast dissemination of fake news, efforts have been made to automate the process of fake news detection. A promising solution that has come up recently is to use machine learning to detect patterns in news sources and articles, specifically deep neural networks, which have been successful in natural language processing. However, deep networks come with a lack of transparency in the decision-making process, i.e. the “black-box problem”, which obscures their reliability. In this paper, we open this “black-box” and show that the emergent representations from deep neural networks capture subtle but consistent differences in the language of fake and real news: signatures of exaggeration and other forms of rhetoric. Unlike previous work, we test the transferability of the learning process to novel news topics. Our results demonstrate the generalization capabilities of deep learning to detect fake news in novel subjects from language patterns alone.
</description>
<pubDate>Thu, 01 Nov 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/120056</guid>
<dc:date>2018-11-01T00:00:00Z</dc:date>
</item>
<item>
<title>Representations That Learn vs. Learning Representations</title>
<link>https://hdl.handle.net/1721.1/119834</link>
<description>Representations That Learn vs. Learning Representations
Liao, Qianli; Poggio, Tomaso
During the last decade, we have witnessed tremendous progress in Machine Learning and especially in the area of Deep Learning, a.k.a. “Learning Representations” (LearnRep for short). There is even an International Conference on Learning Representations.&#13;
Despite the huge success of LearnRep, there is a somewhat overlooked dimension of research that we would like to discuss in this report. We observe that there is a chicken-and-egg problem between “learning” and “representations”. In the view of traditional Machine Learning and Deep Learning, “learning” is the “first-class citizen” — a learning system typically starts from scratch, and the learning process leads to good “representations”.&#13;
In contrast to the above view, we propose the concept of “Representations That Learn” (RepLearn, or Meta Learning): one can start from a “representation” that is either learned, evolved, or even “intelligently designed”. Unlike a system that starts from scratch, this representation already has some functionalities (e.g., reasoning, memorizing, theory of mind, etc., depending on the task). In addition, such a representation must support a completely new level of learning — hence we have a “representation that learns”.&#13;
Furthermore, one can go further in this direction and define “Hyper-learning”: multiple levels of representations are formed, and each level of representation supports a level of learning that leads to the representation of the next level. Note that this is different from building multiple layers of deep neural networks. Instead, it is similar to how an operating system is implemented: an OS has at least three levels of representations: electrical signals on transistors, machine language, and high-level languages.&#13;
We believe RepLearn is similar to how humans learn — many representations in our brain are formed before any learning happens (i.e., they are genetically coded). They serve as prior knowledge of the world and support a level of high-level learning (e.g., memorizing events, learning skills, etc.).
</description>
<pubDate>Mon, 31 Dec 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/119834</guid>
<dc:date>2018-12-31T00:00:00Z</dc:date>
</item>
<item>
<title>When Is Handcrafting Not a Curse?</title>
<link>https://hdl.handle.net/1721.1/119833</link>
<description>When Is Handcrafting Not a Curse?
Liao, Qianli; Poggio, Tomaso
Recently, with the proliferation of deep learning, there is a strong trend of abandoning handcrafted systems/features in machine learning and AI by replacing them with “end-to-end” systems “learned from scratch”. These learning paradigms have achieved tremendous success. Researchers show that learning-based algorithms are general — they can be applied to new domains and achieve good performance. In contrast, handcrafted systems are becoming machine learning’s new “taboo”, repeatedly criticized in recent papers. Motivated simply by critical thinking, we ask: are handcrafted systems really always a curse? Is there any hidden merit to them?&#13;
&#13;
In this short report, we discuss when handcrafted systems can in principle be used to solve tasks in new domains. We also discuss why sometimes handcrafted systems can be preferred.
</description>
<pubDate>Mon, 31 Dec 2018 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/119833</guid>
<dc:date>2018-12-31T00:00:00Z</dc:date>
</item>
<item>
<title>Spatial IQ Test for AI</title>
<link>https://hdl.handle.net/1721.1/113004</link>
<description>Spatial IQ Test for AI
Hilton, Erwin; Liao, Qianli; Poggio, Tomaso
We introduce SITD (Spatial IQ Test Dataset), a dataset used to evaluate the capabilities of computational models for pattern recognition and visual reasoning. SITD is a generator of images in the style of the Raven Progressive Matrices (RPM), a common IQ (Intelligence Quotient) test used to assess analytical intelligence. RPMs are purely visual and require little prior knowledge; they test the user’s ability to derive abstract rules and patterns from a set of images.&#13;
For the last 100 years, humans have evaluated intelligence using standardized intelligence quotient exams. These tests examine different aspects of intelligence, including verbal ability, quantitative reasoning, and spatial reasoning. In the field of AI, there exist few established intelligence metrics beyond the Turing Test (TT) and the Total Turing Test (TTT). Thus, SITD makes for a useful dataset researchers can use to divide and conquer the task of creating “intelligent” machines.
</description>
<pubDate>Sun, 31 Dec 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/113004</guid>
<dc:date>2017-12-31T00:00:00Z</dc:date>
</item>
<item>
<title>Human-like Learning: A Research Proposal</title>
<link>https://hdl.handle.net/1721.1/111654</link>
<description>Human-like Learning: A Research Proposal
Liao, Qianli; Poggio, Tomaso
We propose Human-like Learning, a new machine learning paradigm aimed at training generalist AI systems in a human-like manner, with a focus on human-unique skills.
</description>
<pubDate>Thu, 28 Sep 2017 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://hdl.handle.net/1721.1/111654</guid>
<dc:date>2017-09-28T00:00:00Z</dc:date>
</item>
</channel>
</rss>
