
Re: [oc] MISCs and partially desynchronized networks



> > your "MISC Matrix" design is very interesting. Since you mention
> > storing programs in the local memories, I will suppose that you are
> > talking about a MIMD (multiple instructions, multiple data) design
> > instead of a SIMD like early Connection Machines. If that is the case,
> > then the lack of control instructions is strange.
> I don't quite understand in which direction you are pointing, but these
> MISCs are intended to be SISD, like RISCs, if I understand these
> abbreviations correctly. I fear that you misunderstood my ideas.

Since you have more than one ALU, you obviously have multiple data.
Some machines send the same (single) instruction to all processors in
each cycle. So if one is reading a word from its right neighbor and
adding it to its second register in that clock, all the others are
doing the same. This is very limiting (you can't do an if-then-else
based on the value of a register, for example!) so most practical
designs allow you to set a flag in each processor and have the
current instruction be treated as a NOP by those that have that flag
cleared. An external circuit feeds the instructions to all the
processors, so they don't need control instructions (jump, call and so
on).
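
Just to illustrate (in Python, with made-up names and a toy instruction
format of my own, not your design), the flag trick works like this: to do
an if-then-else, you set the flag where the condition holds, broadcast the
"then" arm, invert the flags, and broadcast the "else" arm.

```python
# SIMD lockstep sketch: every PE gets the same broadcast instruction,
# but PEs whose "active" flag is cleared treat it as a NOP.

def simd_step(pes, instruction):
    """Broadcast one instruction to all processing elements."""
    for pe in pes:
        if pe["active"]:
            instruction(pe)
        # inactive PEs do nothing this cycle (NOP)

def set_flags(pes, cond):
    for pe in pes:
        pe["active"] = cond(pe)

def invert_flags(pes):
    for pe in pes:
        pe["active"] = not pe["active"]

pes = [{"active": True, "r0": v, "r1": 0} for v in (3, -1, 5, -2)]

set_flags(pes, lambda pe: pe["r0"] > 0)              # if r0 > 0:
simd_step(pes, lambda pe: pe.__setitem__("r1", 1))   #     r1 = 1
invert_flags(pes)                                    # else:
simd_step(pes, lambda pe: pe.__setitem__("r1", -1))  #     r1 = -1

print([pe["r1"] for pe in pes])  # → [1, -1, 1, -1]
```

Note that every PE still spends a cycle on both arms, which is exactly
the inefficiency that makes pure SIMD so limiting.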

Since you indicated in another message that each of your processors has
its own PC register, there are obviously multiple different
instructions being executed in each clock. So yours is a MIMD machine
(like a Linux Beowulf cluster, for example). But in that case, you need
control instructions to change the value of the PC in each processor,
don't you?
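
The contrast with the SIMD case can be sketched the same way (again a toy
encoding of my own, not your instruction set): each processor fetches from
its own program through its own PC, and a control instruction like a
conditional jump is what changes that PC.

```python
# MIMD sketch: each PE has its own PC and its own local program, so
# different PEs execute different instructions in the same clock, and
# control instructions (here "jnz") are needed to steer each PC.

def mimd_step(pe):
    op, *args = pe["prog"][pe["pc"]]
    pe["pc"] += 1
    if op == "addi":            # r0 += immediate
        pe["r0"] += args[0]
    elif op == "jnz":           # jump if r0 != 0 (a control instruction)
        if pe["r0"] != 0:
            pe["pc"] = args[0]
    elif op == "halt":
        pe["halted"] = True

# two processors running *different* programs in the same clocks
pe_a = {"pc": 0, "r0": 3, "halted": False,
        "prog": [("addi", -1), ("jnz", 0), ("halt",)]}  # count r0 down to 0
pe_b = {"pc": 0, "r0": 0, "halted": False,
        "prog": [("addi", 5), ("halt",)]}               # add once and stop

pes = [pe_a, pe_b]
while not all(pe["halted"] for pe in pes):
    for pe in pes:
        if not pe["halted"]:
            mimd_step(pe)

print(pe_a["r0"], pe_b["r0"])  # → 0 5
```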

> > Your system is more like SCI, so you are on the right track as long as
> > you can keep it simple.
> I heard of these options before. I wanted to make a lot of simple
> processors and simple protocols rather than a small number of fast ones.
> I think this processor tries to maximize (overall) computation power per
> mm^2.

It also makes good use of designer time, since you create a small
circuit and then replicate it 256 times on the chip. Having the same
number of transistors all designed differently would be much more
costly.

> The only communication needed is between functions and access to the
> memory units. Most of the communication is with direct neighbours and
> FU0. This MISC matrix does, however, execute sequential programs much
> more slowly than the other ideas you mentioned.

But you mentioned a "fractal design", which I understood to mean an
expansion of the network by hooking up the FU0s from different chips
together.

> Maybe it would be a good idea to simulate this processor and find bottlenecks.

You can be sure of this. I did a lot of simulations of different
communication patterns in networks with different sizes, different
topologies and different routing algorithms and I can tell you that
there were many surprises. Einstein was right: "Make things as simple
as possible, but no simpler!".
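
As a flavor of what such simulations look like (this is my own toy
illustration, not the simulator I actually used): even without modeling
contention, just comparing average hop counts under minimal routing
already shows how much the topology matters.

```python
# Compare the average hop count over all distinct source/destination
# pairs on an N x N mesh versus a torus of the same size, assuming
# minimal (shortest-path) routing in each dimension.

from itertools import product

def hops_mesh(a, b, n):
    # mesh: no wraparound links, plain Manhattan distance
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def hops_torus(a, b, n):
    # torus: wraparound links let traffic take the shorter way round
    dx = abs(a[0] - b[0]); dy = abs(a[1] - b[1])
    return min(dx, n - dx) + min(dy, n - dy)

def average_hops(hops, n):
    nodes = list(product(range(n), repeat=2))
    total = sum(hops(a, b, n) for a in nodes for b in nodes if a != b)
    return total / (len(nodes) * (len(nodes) - 1))

n = 16
print(f"mesh : {average_hops(hops_mesh, n):.2f} hops")
print(f"torus: {average_hops(hops_torus, n):.2f} hops")
```

The wraparound links cut the average distance by about a quarter on a
16x16 network, but they also complicate the physical layout, which is
the kind of trade-off that only shows up once you measure.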

-- Jecel