CSC 587, Winter 2016-2017 Ideas file, 5567 words Erik McGuire

* I became somewhat familiar with Ramachandran’s mirror boxes indirectly, years ago, through an anime called Ghost Hound. It in part discussed Penfield, with some neat images, and dealt with the idea of exposure therapy via VR; so I looked into this and found out about ‘virtual’ mirror boxes.

Figure 1 Screenshot from Ghost Hound

This fed into my larger ideas of how imaginary worlds/narratives affect us and can be used therapeutically, something that continually informs my thinking with regard to Cognitive/Computational Creative Writing. Recently, the advent of consumer VR has spawned some interesting ideas that I think extend from this; perhaps the most obvious is meditation in VR, taking advantage of the ‘presence’ afforded by certain settings, such as nature. Some applications even pair biofeedback headbands with VR to enhance meditation.

A newer game called Lucid Trips features a locomotion system where you essentially swim with your arms; while reading the Ramachandran I thought of the above and wondered whether this game, or something like it, could be used alongside biofeedback techniques to ‘reprogram’ our brain maps in various ways. I suspect there’s already plenty of literature on the topic. With programs like Unity, it’s surprisingly easy to create one’s own VR environments, albeit simple ones, so I can imagine DIY virtual therapies… Using a simple tutorial I was able to create a virtual ball in a small virtual space in VR and manipulate it with the HTC Vive controller; I couldn’t help but think back to when I taught myself to juggle when I was younger, inspired by Gelb’s How to Think Like Leonardo da Vinci; the idea was to enhance connectivity between the hemispheres through ambidextrous skills (the prototypical example being the size of the corpus callosum in pianists, I believe). Juggling virtual balls seems like it could be not just more convenient (no knocking things over with dropped real balls) but enhanced, since one can manipulate the balls’ size, shape, color, mass, etc.

* While reading Bermudez on p. 24, regarding the intelligence of Turing machines (he defines them as unintelligent while noting that in some ways it’s difficult to be more intelligent, to paraphrase), I thought of my feelings about AI and Go/Chess—in light of AlphaGo and such; I’ve never seen a game of computer chess or computer Go as being like a game against a human, because the computer lacks situated intentionality. I feel that a ‘game’ arises through opposing minds: embodied, situated, and intentional. In my opinion, a bout against a computer is actually just a single player using software written by a team of programmers to solve computational problems within the abstracted constraints of chess or Go rules; it is not a game. I suppose this has echoes of Dijkstra’s and Turing’s quips suggesting that the “Can machines think?” question is meaningless, akin to “Can submarines swim?”; which brings to mind this paper (PDF Warning) by Konagaya et al., which concludes: “We suggest that even though today’s machines may not be able to think, they can make us think and encourage us to strive for new insights and knowledge.”

Bermudez’s prehistory, especially with regard to Chomsky’s influence, makes me think about how modern linguistics has moved on from Chomsky on even the fundamentals, with ideas like infinite recursion and the alleged hierarchical nature of language being challenged (as seen in those two links). That is, we’ve already seen the growing use of statistical learning in AI, which is also in vogue in linguistics, and now I wonder how this shift in the very ideas of recursion and hierarchy might change AI approaches. I know that Google and others have been seeing better results in their machine learning tools with the dependency, rather than constituency, perspective on grammar.
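
To make that distinction concrete, here is a minimal sketch of inspecting a dependency parse, where structure is word-to-head relations rather than nested constituents; I’m assuming the spaCy library and its small English model here, which are my own choice of tooling, not something from the readings:

```python
import spacy

# Assumes: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("The computer plays a strong game of Go.")

# Dependency grammar: every word points at its head with a labeled relation,
# instead of being grouped into nested phrase constituents.
for token in doc:
    print(f"{token.text:10} --{token.dep_}--> {token.head.text}")
```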

* Zelinsky’s eye-opening take on vision, and on the myriad influences on cognition, visible and invisible, that we are subject to, makes me a bit paranoid about VR—that is, what sorts of side effects might these goggles and virtual worlds have when our retinal input is so intimately encompassed? We might not be consciously aware of these side effects, and discovering them seems difficult without the kind of specialized knowledge of neuro-optometry that Zelinsky describes.

* Going back to ‘intentionality’ a bit: in the Markus, where intentionality is important for transference/learning—and I believe there’s a remark on unfamiliar languages—I found myself thinking about learning a new language with a different writing system (morphographs like the Chinese characters rather than the phonographic alphabet) as a sort of counterpoint to how, in some cases, you can’t ‘unsee’ what you’ve learned. While learning Japanese over the years (trying to, at least), I’ve had to make a deliberate habit of being intentional in how I look at swaths of Japanese text, because it’s so tempting to zoom out and see a mass of complex visual glyphs rather than meaningful and phonological referents. Even when I’ve become very familiar with certain characters and words, the phonological orientation of my English use biases me away from the visual-spatial orientation of written Japanese. The less I use Japanese, the more intentionality I must use to ensure transparency to meaning/sound when reading…

* Some things that didn’t make it into my discussion posts while reading Markus: autism and storytelling—that is, the use of narratives with dialogues to improve theory-of-mind ability in autists. Rather than pointing to specific papers (e.g. on Bubble Dialogue), this book on dialogic learning seems to capture some of these ideas. I minored in history, focusing on the French Revolution and literacy (via Robert Darnton mainly), and as the masses became more literate, imaginary dialogues became tools of Enlightenment counterculture against feudalism. Specifically, ‘philosophical pornography’: the works of the Marquis de Sade featured lengthy philosophical conversations occurring amidst bawdy sex acts, and these kinds of works went ‘viral’ in a sense, changing a popular mindset which had previously treated the clergy and nobility as sacred. Cognitive literary theory also looks at the development of ‘free indirect speech’ in novels and its impact on readers (e.g. increasing mind-reading ability), and we might conceptualize this as a development similar to the Vygotsky/Piaget theory of the shift from ‘external/private speech’ to ‘inner speech’ in children.

There’s also expressive writing in therapy. So it seems to me that the purposeful use of dialogue, with an emphasis on moving from explicit to implicit processes, is a recurrent theme in culture and cognition.

To delve deeper into “successive relearning” and Markus (I connected them while reading): his talk of balance relates to the balance required by paradigms like “desirable difficulties” and the “region of proximal learning”, as well as that popular concept of “flow”.

The ability to quiet mental chatter (rather than reinforce it, as with ‘white bears’) is a benefit of “open monitoring” meditation, in my experience, because it allows you to monitor and ‘let go’ of inner speech till it sort of dissolves (a great sleep aid). At the same time, mental chatter is a useful tool, as we’ve seen above. This reminds me of the notion of ego death vs. the utility of having a self-model, as discussed by Thomas Metzinger: in one sense, ego death is the goal of ‘enlightenment’, but on the other hand, the centralized “I” helps us navigate the world.

It’s hard to conceive of a decentered consciousness, lacking that centralized self-model, being able to function for long in the world. The ability to reflect on and adapt to a complex environment relies, I think, on our personal narratives; the trick is learning to customize them flexibly, using not just the technology of language (if we regard language and its multisensory manifestations of sign/speech/writing as technologies) but others we develop. An idea I return to often is that as the world becomes more technologized and, in a sense, malleable, it’s more important than ever to use metacognitive awareness to cultivate these flexible personal narratives and to filter information according to the kinds of lives we wish to lead.

In the literature there’s some analysis of inductive vs. deductive styles of essays—a stereotypical/traditional Japanese style is said to be inductive (though current use is really more of a hybrid), vs. a ‘Western’ deductive style. Senko Maynard discusses this here. The idea of different approaches to those tests in DSM reminded me of that, and I wonder whether similar approaches to countering rigidity in reading should be considered; Padgett’s idea of “creative reading” comes to mind here. Perhaps we can systematize ways of reading for targeted therapies, and/or design materials to nudge certain types of reading, testing their effects in experiments.

Figure 2 From Padgett's 'Creative Reading'

I’m not sure of the intent behind the use of “I-Ching” in Markus’s DSM description (“I-Ching exercises…”), but superficially it reminds me a bit of “glyphomancy”, a Chinese practice (historically, at least) of visually decomposing the characters into constituent meanings and attempting to divine the future that way. Learning the characters has actually been suggested to increase intelligence due to their visual-spatial complexity: “… the massive practice in visuo/spatial processing and memory seemed to provide an advantage in the communication between systems of the mind causing increased general cognitive fluidity, expressed in higher intellectual performance among the Chinese.”

One of the authors of that paper, Andreas Demetriou, is fairly well known for Neo-Piagetian theories and the idea of ‘hypercognition’—Demetriou prefers (as noted in this book) hyper- rather than meta- on grounds of accuracy: meta- means “after” whereas hyper- means “higher”; I suppose the idea is to focus on the parallel aspects.

Ages ago when minoring in French history, I read a bit of Zola and was taken with his idea of “naturalism” in novel writing, where the idea, as I recall it, was essentially to create the set/pieces in such a way that character behaviors and plots would sort of generate themselves deterministically. Thus began the seed of an idea as a writer, which developed into a plan to create a program/AI tool I could collaborate with to generate novels by setting certain constraints and creating intelligent agents. I’ve sort of been dancing around this plan as I learn more and more about programming and computer science/cognitive science, et cetera, though I keep an eye out for what others are up to, so I’ve been interested in the sort of ‘creative coding’/‘creative AI’ stuff that’s out there with regard to neural networks.

My interest is mainly in the horror genre. Some of my other plans have involved ‘hacking’ public domain works (the kernels for this began before all those literary mashups took off in popular culture, but it was nice to see that people were embracing such appropriation and the potential therein for original, rich creativity).

So imagine my surprise when encountering Dr. Elliott’s application of the Affective Reasoner to Frankenstein! Scanning my old notes relating to my original ideas, one thing I wanted to do was reverse engineer personality attributes through text analysis (of texts written by historical figures) to create bots.

Somewhat recently, a popular post by Ross Goodwin on creative collaboration with neural-network AI was published on Medium, but my interest has been in a tool/AI authored more intimately by myself, from the most basic code on up; Goodwin’s approach seemed too much like tweaking a black box, so I’ve been interested in other approaches, and this is another reason I like the “Morphing the Monster” ideas.

Something I’ve wanted to experiment with, or see others experiment with, is coding some sort of analog of the possible physiological underpinnings of affect that researchers and theorists discuss, and then seeing what kinds of behaviors and ‘feelings’ are generated, without actually modeling or specifying particular emotions in the way that humans do, discarding the discretely labeled variables, so to speak. This would allow for the possibility that something ‘new’ might develop which might then be coded in terms of affect, rather than trying to simulate top-down, in AI/virtual reality, the maps we create and take as the territory in humans/corporeal reality.

But that “Morphing the Monster” paper and the AR present compelling ideas for how new stories can be generated by simulating explicitly defined emotions, so I will be re-reading and thinking about this.
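
As a toy illustration of one small piece of such a simulation (not Elliott’s actual implementation, just a hedged sketch of where I might start), explicitly defined emotions could be stored as labeled intensities that decay over time unless re-elicited by appraised events:

```python
import math

# Hypothetical sketch: discrete, labeled emotions with exponential decay.
# The labels, half-life, and update rule are my own assumptions, not the
# Affective Reasoner's.
class EmotionState:
    def __init__(self, half_life_s: float = 60.0):
        self.intensities: dict[str, float] = {}    # e.g. {"fear": 0.9}
        self.decay = math.log(2) / half_life_s     # per-second decay rate

    def elicit(self, emotion: str, amount: float) -> None:
        """An appraised event bumps a labeled emotion's intensity."""
        new = self.intensities.get(emotion, 0.0) + amount
        self.intensities[emotion] = min(1.0, new)

    def tick(self, dt_s: float) -> None:
        """Intensities decay toward zero between eliciting events."""
        factor = math.exp(-self.decay * dt_s)
        self.intensities = {e: i * factor for e, i in self.intensities.items()}

state = EmotionState()
state.elicit("fear", 0.9)
state.tick(120.0)            # two minutes pass with no new events
print(state.intensities)     # {'fear': 0.225}: two half-lives of decay
```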

The way Ben Novak describes abductive logic in terms of the ‘strange’ reminded me of defamiliarization/ostranenie (‘making strange’) in literature (I’ve posted on the forum about how cognitive literary theorists think the uncanny spaces created through defamiliarization allow for beneficial effects). I was recently reading about uncertainty in AI and Gaussian processes, so incorporating that seems like it could be interesting.
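
For instance, a Gaussian process regressor quantifies its own uncertainty away from the data, which could serve as a built-in ‘strangeness’ signal. A minimal sketch with scikit-learn (my choice of library, kernel, and toy data, not anything from the readings):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Fit a GP to a handful of noisy observations of sin(x) on [0, 6].
rng = np.random.default_rng(0)
X = rng.uniform(0, 6, size=(8, 1))
y = np.sin(X).ravel() + 0.05 * rng.standard_normal(8)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X, y)

# The predictive standard deviation grows in regions the model has never
# seen (here, x > 6) -- a measure of how 'strange' an input is to it.
X_test = np.linspace(0, 10, 5).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)
print(np.round(std, 3))
```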

When discussing emotion decay on d2l, I added in my notes “an infinite system generating finite emotions”, I suppose extending from the idea of emotions as sort of ‘unbounded’ (which I speculated on skeptically when discussing new research suggesting that willpower is infinite and ego depletion is not replicable). That infinite-generating-finite conceptualization was a riff on MAK Halliday’s reversal of Chomsky: Halliday’s grammar, Systemic Functional Linguistics, sees things the opposite way from Chomsky, who describes language/grammar as “a finite system generating an infinite body of text”—Halliday sees it as “an infinite system generating a finite body of text” (boundless potential vs. quantifiable instantiations).

Regarding AI policies, I wonder if we’ll end up with something like birth or immigration policies (‘one child’, as in China) violated by hackers who attempt to crowd the virtual ecology/bandwidth by endlessly generating intelligences. I like to think that intelligence amplification and artificial intelligence will advance in lockstep over time, and that we won’t be able to ‘grow’ advanced AI without having a reasonable ability to constrain and predict where/when/how they occur. Otherwise, the only way to guarantee a human-like mind might be to literally grow a brain.

Thinking recently, via a discussion thread, about coding abductive reasoning in AI and tagging information with a chain of derivation, I wondered whether the chains could be retroactively reconstructed as necessary rather than stored. I suppose that sounds pretty fuzzy and applies better as a general idea, but it seems to be how memory works: we don’t retrieve exact copies so much as reconstruct them (Karpicke, 2012, PDF). In models of iterated learning and cumulative culture regarding language, I repeatedly see not just the idea of chunking/compressing but also of input and output (where comprehension and production are seen as a single system) containing the information necessary to reconstruct the procedures needed to process it. I was recently reading a piece by Lisa Feldman Barrett on conceptual combination that seemed to suggest it works similarly: we’re continually reconstructing concepts.

I suppose this brings us into the realm of what precisely to store, if anything besides neural patterns designed with the ‘intentional stance’ or whatever functionalist ‘design patterns’ we come up with as higher-level aids. On the other hand, we can get so preoccupied with emulating the human mind; going back to neurodiversity and different, ‘alien’ conceptions of intelligence, perhaps we could leave the symbolic models in as part of a hybrid system even if we reject the LoT concept. After all, a few million years of literacy, or near-future transhumanist modifications, could end up allowing humans to embed literal symbol systems in our mind-brains, given how useful we find ‘writing’ for thinking via external cues creating a ‘long-term working memory’… Speaking of long-term working memory, I recently encountered something new to me, Potter’s ‘conceptual short-term memory’. Semi-relatedly, there’s Ericsson & Kintsch’s long-term working memory that Oatley mentions in this paper on ‘writing as thinking’ [PDF], which allows us, with practice, to rapidly bring to bear various types of interconnected knowledge from LTM while we write.

While reading a Patrick White novel a few years ago, The Vivisector, which was lush with detail, I began thinking about ideas like this, as well as the influence of color words on perception, and began studying sort-of conceptual/foundational spaced-retrieval flashcards (colors [e.g. “viridian”], plants [e.g. “thistle”], architecture [e.g. “dormer”]) to allow for faster access to more detailed onboard knowledge while writing fictive worlds. In this age of transactive memory (e.g. just Google everything instead of memorizing it), which is also an era where research on learning and memory shows the effectiveness and efficiency of things like spaced retrieval (“successive relearning”), we have interesting quandaries, such as what, whether, and how much knowledge to ‘store’ internally vs. externally for fast or slow access.
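
The scheduling behind such flashcards is simple to sketch; here is a minimal expanding-interval scheduler (the one-day start and doubling rule are my own illustrative assumptions, not any particular published algorithm):

```python
from datetime import date, timedelta

# Hypothetical expanding-interval scheduler: each successful retrieval
# doubles the delay before the next review; a failed retrieval resets it.
class Card:
    def __init__(self, front: str, back: str):
        self.front, self.back = front, back
        self.interval = timedelta(days=1)
        self.due = date.today()

    def review(self, recalled: bool) -> None:
        if recalled:
            self.interval *= 2                     # space relearnings out
        else:
            self.interval = timedelta(days=1)      # relearn from scratch
        self.due = date.today() + self.interval

card = Card("viridian", "a blue-green pigment/color")
card.review(recalled=True)   # next review in 2 days
card.review(recalled=True)   # then 4 days, and so on
```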

This experiment of mine has taken a back seat since starting the MS in CS at DePaul, but I shall return to it and see whether it helps. Oatley mentions in that paper that the brains of taxi drivers in London (those with the ‘knowledge’ of London’s streets) are different in certain regions, and wonders if writers have similar differences. If so, I wonder if techniques like spaced retrieval can augment this further.

Reviewing my notes on vision systems differentiating between a toothbrush and a baseball bat somehow got me thinking about ‘mutual exclusivity’ in word learning; I was going to suggest, based on a study I read about a border collie using this process, that perhaps AI could do it too; a quick Google search turned up this article on an ‘embodied neural network’ robot learning to do this at the same time as teaching toddlers to do it.
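
The inference itself is easy to sketch. Here is a toy version under my own assumptions (real systems would work over perceptual features, not strings): a novel word maps to whichever visible object has no known name yet.

```python
# Toy mutual exclusivity: map a novel word onto the one object that has
# no known name, as the border collie appeared to do.
known = {"ball": "ball_obj", "bone": "bone_obj"}

def resolve(novel_word: str, visible_objects: list[str]) -> str | None:
    unnamed = [o for o in visible_objects if o not in known.values()]
    if len(unnamed) == 1:                  # exactly one nameless candidate
        known[novel_word] = unnamed[0]     # fast-map the new word onto it
        return unnamed[0]
    return None                            # ambiguous: don't guess

print(resolve("dax", ["ball_obj", "bone_obj", "toy73_obj"]))  # toy73_obj
```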

After reading a paper on the role of entropy in creative cognition (your remark on psi/‘anti-entropy’ reminded me of it), I wonder whether that uncertainty and fallibility will increase creativity (I suppose this avoidance of perfect certainty doesn’t need to be programmed if we’re dealing with weights/confidence levels and probability); the idea being that mitigating the entropy caused by uncertainty drives restructuring and associative thinking. While we’re on the subject of physics metaphors, there’s also the ‘theory of cognitive spacetime’, the idea being that we should use this metaphor to get better at understanding how spatial and temporal cognition are always linked. Given the differences [PDF p. 6] between literate cognition and oral cognition, perhaps it’s not too soon to start thinking about the aforementioned hybridity of symbolic/network approaches to prep for a world of cybernetic brains…

This didn’t make it into my public comment on knowing/certainty:

Perhaps we can bring this back to the visuospatial/motor aspects by relating it to Lakoff's 'embodied' ideas about how the mind works, and the notion that when you attempt to negate a 'frame' you reinforce it (the ironic process problem, which can be avoided through the processes involved in meditation); the idea being that you assert your own frames instead of negating others'.

Perhaps the ability to 'step back' and reason about/adjust metaphors as tools [PDF] in a 'grounded' world is necessary here, to model and evaluate the usefulness of variously detected or asserted possibilities.

I couldn't help but think of the idea of storytelling and rationalizing decisions; often the more compelling story, where a possible sequence of events falls into place like a particularly elegant solution to a puzzle, can provide the impetus to finalize a decision, I think.

Perhaps we want AI to tell themselves compellingly uncertain stories about their own possible actions, stories which are as much about 'beautifully' completing puzzles/patterns as they are about practical goal attainment. Maybe OCD rituals are a form of such stories, intended to give a sense of symmetric resolve about compulsive thoughts. In this light, what appears asymmetric or dissonantly uncertain can be seen as compellingly always-potentially-symmetric.

Thinking about abductive reasoning/satisficing in hybrid systems, I wrote: ... perhaps adversarial approaches help here, displacing human labelers...

I wrote this during one of the lectures, still trying to figure out what I meant: “apparently not a lot of inter-item comparison with ai vision systems -- e.g. realizing a toothbrush is too small to be a baseball bat, or seeing a few examples of an animal and distinguishing it from every other or most other animals at the same time as knowing it's part of that type of animal class” – Maybe I meant some kind of organizing ontology like SUMO? I had “semantic/declarative knowledge representation” appended to it, also. I’m not sure whether I did the right thing in correlating ‘content theory’ with ‘ontology’, by the way. I did a search and found this paper [PDF] that made me think I was on the right track.

When thinking about “programming fallibilism”, I wondered if deliberately creating ‘imperfect’ systems might be both necessary and an ethical dilemma, perhaps even against the law somehow, like if we were capable of making ‘perfect’ humans but chose not to (the disturbing waters of eugenics)… We might have to ensure/debate a certain level of perfection as AI become the ethical equivalent of lab animals in discussions over the next, what, 10-15 years? Debates about AI neurodiversity…

I suppose I touched on this earlier, but also in my notes I have a comment about how perhaps AI need to use their own language: one that’s evolved to be transmissible to them based on their ‘cognitive architecture’, rather than the languages we use, which have culturally evolved for us (language has adapted [PDF] to our brains rather than vice versa, according to most contemporary evolutionary linguists).

I think recently skimming through Smith & Kirby’s well-known cultural cumulation/iterated learning paper, and also reading about the supposed ‘interlingua’ that DeepMind developed, got me thinking about this. (I also might connect the ‘interlingua’ to wondering whether a symbolic system might emerge from deep learning approaches once sufficient complexity is reached: the interlingua as a proto-symbol system, of sorts.)

Perhaps humans could attempt to learn these languages, affecting their way of viewing the world, as discussed in various linguistic relativity articles and papers, such as the one I linked on the forums. Maybe bots could generate different languages expressly for this purpose. Most of the paper abstracts I’ve read from Evolang conferences over the years, as well as folks like Luc Steels (re: robotics), seem to agree that the embodied component is important in the process.

Thinking about AI as animals, another ethical point to consider is nociception/pain. Should it be mitigated somehow? In humans it serves a purpose, yet it can go awry; has it really evolved to be optimally efficient? Might we perfect it in synthetic beings, making it so that they feel only as much pain as is necessary to alert them to damage to their systems/bodies? We don’t really know enough about the concepts to understand how its removal/adjustment would affect highly cognitively complex beings, I imagine.

On the topic of approximate computing and satisficing in AI, which I mentioned in discussions, I found myself making a note to research streaming databases and AI, and AI on distributed computing platforms. I did find that TensorFlow seems to have some tutorials for distributed processing.
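
For what it’s worth, the current tf.distribute API makes the simplest synchronous data-parallel case look roughly like this (a minimal sketch; the toy model is my own placeholder, and the API has changed across TensorFlow versions):

```python
import tensorflow as tf

# Mirror the model across all local GPUs; the strategy averages
# gradients across replicas on every training step.
strategy = tf.distribute.MirroredStrategy()
print("replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():        # variables created here are mirrored
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# model.fit(dataset) would then run one synchronized step per batch.
```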

Over the years I’ve read a lot about neuroplasticity and was aware that adult neurogenesis occurs, but I didn’t realize that it seems to be limited to, I think, just two areas: the hippocampus (dentate gyrus) and the olfactory bulb. Of course, the potential impact on learning and memory from hippocampal neurogenesis seems to be very high. Still, it does place the emphasis in neuroplasticity, I think, on achieving effects through new connections in pre-existing areas rather than through new neurons. I see here a snippet about the computational effects of neurogenesis.

We’re constantly questioning fundamental aspects of the brain, so who knows what potential we have even without neurogenesis. For instance, a new paper suggests the brain is “10 times more active than previously measured”—apparently the dendrites are more active than thought and generate their own spikes, ten times more than the spikes from the cell bodies. The suggestion is that the dendrites are more analog than digital, more like quantum computers, so this seems like it could have quite an impact. In a recent Research Colloquium presentation on real-time embedded systems by Zonghua Gu, I encountered the idea of SNNs, spiking neural networks; I wonder how these might be affected. Apparently the neuromorphic TrueNorth processor uses SNNs, but I wonder if quantum computing is required, as the paper’s author(s) suggest. If so, perhaps distributed neural networks are required, given the cooling requirements (the coldest places in the known universe, I believe?). So the infrastructure might really narrow the possibilities of strong-AI proliferation… NASA wants to put a similar cooling system in space, I think. Skynet…?
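
As a reference point for what ‘spiking’ means computationally, here is a toy leaky integrate-and-fire neuron (the parameter values are arbitrary choices of mine; real SNN hardware like TrueNorth implements far more refined models):

```python
# Toy leaky integrate-and-fire neuron: the membrane potential integrates
# input current, leaks back toward rest, and emits a discrete spike
# whenever it crosses threshold.
def lif_run(currents, v_rest=0.0, v_thresh=1.0, leak=0.9, gain=0.1):
    v, spikes = v_rest, []
    for i in currents:
        v = v_rest + leak * (v - v_rest) + gain * i
        if v >= v_thresh:       # threshold crossed: fire, then reset
            spikes.append(1)
            v = v_rest
        else:
            spikes.append(0)
    return spikes

# A constant drive yields a regular spike train: information lives in
# discrete spike timing rather than in continuous activations.
print(lif_run([1.5] * 30))
```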

Having said that, I’ve always thought the focus on ‘binary’ electrical pulses was misguided given the presence of chemicals/neurotransmitters; much like the neglect of ‘affect’ in neuroscience, which I felt made things more ‘analog’ anyway. There’s a quote I like from David Mumford: “The world is continuous, but the mind is discrete.” If we consider the brain as part of the world in that sense, I think it works.

• An interesting way we adjust to computer vision: with Google’s ‘not a bot’ check, if it says “click the images with grass”, you’ll give the wrong answer if you spot grass in the corner of a picture and select that picture. You have to factor in the flaws in the system, adjusting your ‘theory of mind’ for the ‘AI’, so to speak.

Something that came to mind during lectures when discussing psi was a video I watched about 10 years ago, I think on either the feelSpace or Sensebridge belts, which buzz when the wearer faces north; wearing them for some time seems to cause rewiring in the brain to integrate the technology and give the user a sixth sense, of sorts. We know that when learning, say, braille, there’s a rapid reorganization in the brain, sighted or otherwise. I was thinking during lecture that perhaps using technology to create our own version of psi would help unravel any mysteries such as those suggested by Bem’s work, by allowing us to see ‘extra’-sensory processes in action in a way we understand, because we induced them in a controlled fashion.

Relatedly, there’s the idea that we shouldn’t think too strongly in terms of fixed sensory modalities in working memory, because sign language supposedly reveals how sensorimotor coding can engender an articulatory loop rather than a ‘phonological’ one (the terms are often used synonymously due to phonocentrism, I think). Similarly, Christiansen & Chater seem to view short-term memory (noted here in section R2) as featuring interference problems rather than “slots”/“buffers” that are limited in capacity (the magic number). Robert Bjork has a ‘storage strength’ and ‘retrieval strength’ model of memory, in which memories can become difficult or nigh-impossible to retrieve but aren’t actually forgotten. There’s also “hierarchical process memory”, where “all cortical circuits can accumulate information over time” and “memory is not restricted to a few localized stores”.

C&C’s approach to STM seems to yield good results in their computational models, and I think flexibility in working-memory and long-term-memory models, rather than very precise formulations, is likewise good, as we hardly have an idea of how memory works in ourselves. But Bjork’s theory, which seems reasonable, presents a problem, I think, in terms of permanently storing massive amounts of information that may never be accessed. Does one choose not to reflect this feature of humans in an artificial mind? If so, we can’t expect a human-like mind to emerge, I think; perhaps, at best, one like those humans who are blessed/cursed never to forget.
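
A toy rendering of Bjork’s two-strength idea (the update rules and constants here are purely my own illustration, not his formal model): retrieval strength decays with time, storage strength never decreases, and a successful retrieval boosts both; this is why a ‘forgotten’ item can still be relearned quickly.

```python
import math

# Toy model of Bjork's storage strength (SS) vs. retrieval strength (RS).
# The update rules and constants are my own illustrative assumptions.
class Memory:
    def __init__(self):
        self.ss = 1.0    # storage strength: monotonically non-decreasing
        self.rs = 1.0    # retrieval strength: current accessibility

    def wait(self, days: float) -> None:
        # Greater storage strength slows the loss of accessibility.
        self.rs *= math.exp(-days / self.ss)

    def retrieve(self) -> None:
        # Harder retrievals (low RS) yield bigger gains in both strengths.
        gain = 1.0 - self.rs
        self.ss += 2.0 * gain
        self.rs = min(1.0, self.rs + 0.5 + 0.5 * gain)

m = Memory()
m.wait(5.0)       # RS plummets; the item feels forgotten...
m.retrieve()      # ...but one retrieval restores it and boosts SS
print(round(m.ss, 2), round(m.rs, 2))   # SS ~2.99, RS back to 1.0
```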

It was Sidney Harris who drew the comic about the scientific process, ‘then a miracle occurs’—I remembered this in lecture but neglected to mention it on the forum: before xkcd, there was Sidney Harris. One of my favorites, speaking of neuromorphic chips and such, is this:

Makes one wonder what sort of ‘drugs’ an AI would enjoy, if any. Rather than generating new molecules for humans, they could manufacture their own altered states, enclosed in tiny simulations partitioned off from their main processes. Perhaps such simulations could be used to inform humans what cognitive effects a drug might have; we might use tools such as this to ‘unbox’ the black box.

It’s hard not to see the hardware and software as inextricably intertwined; somehow while researching approximate computing I stumbled on this page, and the aforementioned research colloquium on embedded systems and neuromorphic chips seems to show how essential very low-level thinking about software is on the hardware suited to these computations. Will being versed in quantum physics become increasingly necessary in cognitive science and AI?

If so, perhaps higher-level unifying theories will be useful: lately I keep reading about things like homotopy type theory and “bootstrapping” as ways to get at the root of these foundational disciplines and allow for conceptual bridges and frameworks we might otherwise find impossible. Their connections to programming might also make them accessible without advanced degrees. Our notation systems truly are profound tools.

I mentioned on the forum that ‘even with language’ we use the procedures encoded in the language to learn it; I was somewhat referencing Kirby and Smith’s work suggesting that language contains within itself the mechanisms for its own transmission, but there’s also a debate [PDF] in linguistics on implicit vs. explicit grammatical instruction. Those who believe in an innate language module and universal grammar generally don’t seem to accept the utility or validity of explicit grammatical instruction (the proceduralization of rules). This came to mind when reading Pylyshyn on ‘compiled transducers’. I think most research suggests we can use both means of learning, such that the results are indistinguishable.

Research looking at event-related potentials (N400 for vocabulary, P600 for grammar is the standard pairing, I think) seems to indicate that the brain begins to look the same in some core ways regardless of the learning method used, provided the language is learned to a high enough level that proficiency is indistinguishable between ‘native’ and ‘non-native’ learners, even if monolingual and multilingual brains inevitably differ in many ways. So there’s some functionalism/multiple realizability there, I suppose.

The recent arrest of John Rayne Rivello, for tweeting an image intended to induce a seizure in Kurt Eichenwald and kill him, I somehow connected to the idea from the older anime (I haven’t seen the recent adaptation[s]) where cybernetic brains were hacked. I suspect that future laws might be tied to precedents such as this, where the crime is virtual and cognitive/neural in origin and transmission, while its effects result in corporeal casualties.

I like the Bermudez text, but I do find that obsolete ideas are often presented uncritically; the style seems to be to present them somewhat chronologically, as if they weren’t outmoded or refuted, only later suggesting alternatives. My bugbear is Chomskyan linguistics, so I would hate for anyone to read the text and think his ideas stood unchallenged.

My first encounter with Bem’s research was a few years before I began the MS in CS degree, and my first thought was to write a fictional paper in APA style taking the idea of the supernatural seriously. At the time I was surprised to see a ‘parapsychology’ sort of work that seemed so robust. It’s interesting to look at it now with a hopefully more informed eye. In a way it reminds me of Libet’s work, which seemed to have implications for free will: based on my understanding of consciousness, I always assumed it was a non-issue that could be explained by self-consciousness being slower than non-self-consciousness (“unconsciousness”, traditionally), emphasizing a model more like Dennett’s and Dehaene’s (e.g. the global neuronal workspace, which I think is discussed in Bermudez), where (self-)consciousness is “temporally smeared” across the brain.

I suppose it’s clear that I am more partial to the views of embodied cognition [PDF] and neural networks than I am to a mentalese/LoT. At the same time, perhaps I am naïve, but I think the ideas are compatible, and that initially non-symbolic processes can iterate upwards into symbolic and eventually natural-linguistic (as in English, etc.) processes. I’ve been struggling with the idea of a hybrid system, but I think through this class I’ve gained a clearer idea of how it might work, as I’ve discussed here and there.

Even though we can dissociate the ways we process syntax and semantics in the brain at a certain level of focus, actual language use requires a coordination of the two, and my impression is that meaning is the dominant force when push comes to shove. MAK Halliday has talked about “lexicogrammar” as a continuum, and I think this is the right idea. Posing dualities and taking them for granted in theorizing seems to lead to no end of intractable “hard problems”, “symbol grounding” problems, “mind-body” problems, etc. Great for the academic journal industry, but not so much for scientific progress.

There’s an article up for debate about how we think of the brain/mind. While I think the information-processor model is very useful, the article does highlight the general question of how useful our metaphors for the brain are. When studying French history a long time ago, I read snippets of La Mettrie’s Man a Machine, which somehow led me to a site that archived historical metaphors for the mind; that really put things in perspective. Epstein, the author of the article, seems to want a metaphor-free, dynamic account, but I think we might already be at the stage where we’re basically making the metaphor real, by creating intelligent information-processing, symbol-manipulating systems and integrating our mind-brains, whatever they really are, into them.
