Discussion:
DRAM circuits
Adnan Aziz
2004-11-09 22:50:37 UTC
i teach a vlsi design class at UT austin, and there were a couple of
questions in my last lecture on DRAMs that i couldn't answer.

the text (weste and harris, "cmos vlsi design", 3rd edition, excellent
book) and my DRAM reference (keeth & baker, dram circuit design)
weren't much help, so i thought i'd ask the net.

- Q1. why is the bitline pre-charged to V_DD/2 (instead of V_DD). i
thought this would be for performance, i.e., get a larger swing
quicker, but at least from a simple model, the opposite seems to be
true. perhaps it's related to power or noise?

- Q2. shouldn't DRAM writes be faster than reads? (the logic being
that in reads, the bitline is driven by the trench capacitor, but in
writes the bitline has an active driver. perhaps the reason has
something to do with senseamp logic compensating for the slow read.)

cheers,
adnan

ps - pls reply to the newsgroups, or send me mail at adnan at
ece_nospam . utexas . edu_DELETE (the ***@hotmail.com acct is
long gone)



-------------------------------------------
Adnan Aziz, Dept. of Elect. and Comp. Eng.,
The University of Texas, Austin TX, 78712
1 (512) 475-9774 www.ece.utexas.edu/~adnan
-------------------------------------------
glen herrmannsfeldt
2004-11-10 00:20:19 UTC
Adnan Aziz wrote:
(snip)
Post by Adnan Aziz
- Q2. shouldn't DRAM writes be faster than reads? (the logic being
that in reads, the bitline is driven by the trench capacitor, but in
writes the bitline has an active driver. perhaps the reason has
something to do with senseamp logic compensating for the slow read.)
Since read is destructive, and it has to write it back anyway, yes,
it would seem that writes should be faster. For a synchronous system,
though (did you say SDRAM?) it may not matter.

-- glen
Marcus Schaemann
2004-11-10 13:00:35 UTC
Post by Adnan Aziz
i teach a vlsi design class at UT austin, and there were a couple of
questions in my last lecture on DRAMs that i couldn't answer.
- Q1. why is the bitline pre-charged to V_DD/2 (instead of V_DD). i
thought this would be for performance, i.e., get a larger swing
quicker, but at least from a simple model, the opposite seems to be
true. perhaps it's related to power or noise?
Hello,

I think the bitline has to be precharged to V_DD/2 because of the
read/write amplifier (it operates like an SRAM cell).

The bitline and a second, unused bitline are both precharged to V_DD/2,
and these two are the inputs to an SRAM-like cell. If the selected DRAM
cell now changes the voltage of its bitline (the change is very small
because the capacitance of the DRAM cell is small compared to the
capacitance of the bitline), the SRAM cell will switch to the correct
full-rail voltages and thus write the information back to the DRAM cell.
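To put rough numbers on how small that swing is, here is a quick
charge-sharing sketch in Python (the capacitance and voltage values are
assumed, illustrative figures, not taken from any real process):

```python
# Charge-sharing estimate for a DRAM read; all values are assumed,
# illustrative numbers, not from any real process.
C_cell = 30e-15   # storage-cell capacitance (~30 fF, assumed)
C_bl = 300e-15    # bitline capacitance, ~10x the cell (assumed)
VDD = 2.5

def bitline_after_read(v_cell, v_pre=VDD / 2):
    # Charge is conserved when the access transistor connects
    # the cell capacitor to the precharged bitline.
    return (C_cell * v_cell + C_bl * v_pre) / (C_cell + C_bl)

dv_one = bitline_after_read(VDD) - VDD / 2   # swing for a stored "1"
dv_zero = bitline_after_read(0.0) - VDD / 2  # swing for a stored "0"
print(round(dv_one, 3), round(dv_zero, 3))   # -> 0.114 -0.114
```

With a 10:1 bitline-to-cell capacitance ratio the sense amp sees only
about a 100 mV signal, which is why it must regenerate the level to the
full rails before writing it back.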

Regards,

Marcus
Bjørn B. Larsen
2004-11-12 14:48:30 UTC
Post by Adnan Aziz
i teach a vlsi design class at UT austin, and there were a couple of
questions in my last lecture on DRAMs that i couldn't answer.
the text (weste and harris, "cmos vlsi design", 3rd edition, excellent
book) and my DRAM reference (keeth & baker, dram circuit design)
weren't much help, so i thought i'd ask the net.
- Q1. why is the bitline pre-charged to V_DD/2 (instead of V_DD). i
thought this would be for performance, i.e., get a larger swing
quicker, but at least from a simple model, the opposite seems to be
true. perhaps it's related to power or noise?
When you read the bit, you check whether the cell capacitor holds any
charge or not. Remember that the capacitance of the bitline is
significantly larger than that of the trench capacitor.

When you read, you do not observe whether the bitline changes to VDD or
to GND, but rather which direction it moves. It will reach neither VDD
nor GND on the read alone.

When the read is done, the voltage on the trench capacitor is close to
VDD/2. That is why it is a destructive read and you need to rewrite the
contents of the cell.
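A quick numeric sketch (with assumed, illustrative values) shows why the
read is destructive: after charge sharing, the cell sits at the same
voltage as the bitline, close to VDD/2 rather than at its stored level:

```python
# After the access transistor opens, cell and bitline equalize to one
# shared voltage; all values below are assumed for illustration.
C_cell, C_bl, VDD = 30e-15, 300e-15, 2.5

def shared_voltage(v_cell):
    # Bitline precharged to VDD/2 shares charge with the cell.
    return (C_cell * v_cell + C_bl * (VDD / 2)) / (C_cell + C_bl)

v_read_1 = shared_voltage(VDD)  # cell that stored VDD ends near 1.36 V
v_read_0 = shared_voltage(0.0)  # cell that stored 0 V ends near 1.14 V
# Both are close to VDD/2 = 1.25 V: the stored level is lost and must
# be restored by the sense amp writing the full level back.
```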
Post by Adnan Aziz
- Q2. shouldn't DRAM writes be faster than reads? (the logic being
that in reads, the bitline is driven by the trench capacitor, but in
writes the bitlinehas an active driver. perhaps the reason has
something to do with senseamp logic compensating for the slow read.)
As soon as the bitline moves in either direction, the sense amp will
respond. On a write you must wait until the bitline and trench capacitor
are fully charged or discharged. (I think that may be the reason.)

--------------------
Have a great day!
Bjørn BL.
john jakson
2004-11-13 16:33:20 UTC
Post by Adnan Aziz
i teach a vlsi design class at UT austin, and there were a couple of
questions in my last lecture on DRAMs that i couldn't answer.
the text (weste and harris, "cmos vlsi design", 3rd edition, excellent
book) and my DRAM reference (keeth & baker, dram circuit design)
weren't much help, so i thought i'd ask the net.
- Q1. why is the bitline pre-charged to V_DD/2 (instead of V_DD). i
thought this would be for performance, i.e., get a larger swing
quicker, but at least from a simple model, the opposite seems to be
true. perhaps it's related to power or noise?
- Q2. shouldn't DRAM writes be faster than reads? (the logic being
that in reads, the bitline is driven by the trench capacitor, but in
writes the bitline has an active driver. perhaps the reason has
something to do with senseamp logic compensating for the slow read.)
Both those books are very good.

The other book you could add would be

L. Glasser and D. Dobberpuhl, The Design and Analysis of VLSI
Circuits. Reading, MA: Addison-Wesley, 1985.

which I consider the more serious circuit-design book; perhaps the
reduced need for circuit design relative to logic/system work has left
it behind.

Q1
Anyway, the answer to the VDD/2 bitline precharge is very simple.

In the older DRAMs the line was precharged to VDD or even VSS. The sense
amp is a cross-coupled 2T N-flop with some extra devices for access,
equilibration, and precharging. The cross-couple, though, has exponential
gain if its common-source floating sense ground is slowly pulled to true
ground along an exponential path, which is achieved by a small and a big
NMOS successively pulling maybe 64 or more sense amps to ground.

It's important that all sense amps be similar and not interfere with
each other, even though they all share a common sense drive line. In
CMOS the same is also true for the top side, since there may also be a
cross-coupled P-flop, but the N side is of more importance.

Now, when the bitlines were charged to VDD, the opened bit cell
contributed a very small charge to the selected bitline. The other side
of the flop needs to be in the center of the eye, and the only way that
could be achieved was to give the other side a reference charge exactly
half of the maximum change on the data side.

Conundrum: if the bit cell is as small as possible to maximize the
number of cells in the array, how can you make a reference cell half
that size? Well, you can't do it reliably, and most techniques at the
time did weird and wonderful things to fake it. One scheme involved
using a normal cell always charged with a 0 and dumping it onto 2
adjacent bitlines.

If the bitline is precharged to the mid level, then the charge in the
cell will nudge the line by about the same Vdif either way, and the
other side of the sense amp needs no reference charge, since the other
bitline is already centered.
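A small sketch (with assumed, illustrative values) makes the reference
problem concrete for the two precharge schemes:

```python
# Compare VDD precharge vs VDD/2 precharge; all numbers are assumed,
# illustrative values, not from any real process.
C_cell, C_bl, VDD = 30e-15, 300e-15, 2.5

def share(v_cell, v_pre):
    # Bitline voltage after charge sharing with the opened cell.
    return (C_cell * v_cell + C_bl * v_pre) / (C_cell + C_bl)

# VDD precharge: a stored "1" gives no swing at all, a stored "0" the
# full swing, so the reference side must be offset by half the maximum
# change -- hence the need for a half-size reference cell.
dv0_vdd = share(0.0, VDD) - VDD      # about -0.23 V
ref_offset = dv0_vdd / 2             # where the reference must sit

# VDD/2 precharge: "1" and "0" swing symmetrically about the precharge
# level, so the untouched complementary bitline is already a valid
# reference with no extra charge needed.
dv1_mid = share(VDD, VDD / 2) - VDD / 2
dv0_mid = share(0.0, VDD / 2) - VDD / 2
print(abs(dv1_mid + dv0_mid) < 1e-12)  # -> True: symmetric swings
```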

Another reason is that the access transistor is an NMOS device, so if
its wordline gate is taken to VDD, it is fully on, allowing the charge
to transfer fully onto or off the bitline, since Vt << VDD/2.

Q2
DRAMs don't perform writes at all in the classic sense, because they
open up a whole line and only change 1 bit out of maybe 64 or 256, etc.
Every write cycle is exactly the same as a read cycle, with a minor
change.

When the entire row has been read, the act of sensing the row bits into
the array of sense amps is always followed by a wait time so that the
fully restored data on the sense amps, now at VDD/VSS, is passed back
into the bit cells.

Initially, during the sense, the sense-amp flops are sampling the
bitlines with a very small charge. The flops are voltage-coupled to the
bitlines but capacitively decoupled, in the following sense: the
bitlines have enormous C and move slowly, while the sense amps have very
little self-C but connect to the bitlines through smallish NMOS access
devices which separate them from the big C. During the amplification
phase, when the 2 driver sense transistors apply the exponential down
signal to the common sense ground of the amps, the sense-amp nodes still
move very slowly so as not to disturb the bitline voltages. When the amp
has a significant margin, it can safely drive current out to the 2
bitlines.

It really helps to run a Spice simulation of this to see how the
voltages move around and come back into the cell. It's a bit too
difficult to explain in words.

The write part
Now all the bits are refreshed whether 1 particular bit is needed or
not. Sometime during the sense phase, we can dump the desired write bit
via the column read-muxing circuit into the specific sense amp that is
reading the desired cell.

In essence, a write cycle is always a read cycle, with the read path
flowing backwards to disturb the sensing so as to put the desired data
into the cell. It can't get any simpler than that.

Hope that helps,

John Jakson
johnjakson_usa_com

(unemployed old time VLSI circuit designer that sometimes wouldn't
mind being asked to design chips again)
john jakson
2004-11-17 15:30:06 UTC
Post by john jakson
Post by Adnan Aziz
i teach a vlsi design class at UT austin, and there were a couple of
questions in my last lecture on DRAMs that i couldn't answer.
snipping

Forgot to add that precharging to VDD/2 uses half the power of VDD
precharging: in any driven block, all the bitlines will cycle from VDD/2
to VDD or VSS and back to VDD/2, whereas in the full-VDD precharge
designs, half the bitlines must cycle the full supply, a ratio of
2/4 = 1/2. I'm not even sure it needs to be exactly VDD/2 either, as
long as paired bitlines have the same V and are near the mid level.
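The half-power claim can be checked with a simple CV²-style estimate
(the capacitance and voltage values are assumed, for illustration only):

```python
# Switching energy per bitline pair per cycle, CV^2-style estimate.
# Values are assumed, illustrative figures.
C_bl, VDD = 300e-15, 2.5

# VDD/2 precharge: both bitlines of the pair swing VDD/2 (one up, one
# down); re-equilibration afterwards is free, done by shorting the
# pair together and letting charge sharing restore the mid level.
e_mid = 2 * C_bl * (VDD / 2) ** 2

# Full-VDD precharge: on average one bitline of the pair swings the
# full supply and must be recharged all the way back to VDD.
e_full = C_bl * VDD ** 2

print(e_mid / e_full)  # -> 0.5, i.e. half the switching energy
```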

Also, after the data is fully amplified, the bitlines need to be
re-equilibrated to the reference VDD/2, which is trivial: just short the
2 bitlines after the wordline is low and let charge sharing do the work;
no current is needed from the supplies. This can be very quick, as a
large NMOS switch can be packed between the 2 lines and doesn't need a
fat metal VDD supply. The sense amp is where the heavy current tracking
would be. In the older VDD-referenced designs, there would be a large
current spike as the '0' bitlines had to be recharged to VDD with no
internal C to draw from, i.e. a large VDD current spike and also fatter
supplies around the chip to meet the electromigration limit.

I also suspect that if all the bitlines sit near VDD/2 most of the time,
except during the brief bank-selected cycles, then the stress on the
unselected row-line devices is minimized and subthreshold leakage is
reduced as well.

Absolutely no thanks needed
Post by john jakson
Hope that helps,
d***@edgehp.net
2004-11-19 02:45:23 UTC
Post by Adnan Aziz
i teach a vlsi design class at UT austin, and there were a couple of
questions in my last lecture on DRAMs that i couldn't answer.
the text (weste and harris, "cmos vlsi design", 3rd edition, excellent
book) and my DRAM reference (keeth & baker, dram circuit design)
weren't much help, so i thought i'd ask the net.
- Q1. why is the bitline pre-charged to V_DD/2 (instead of V_DD). i
thought this would be for performance, i.e., get a larger swing
quicker, but at least from a simple model, the opposite seems to be
true. perhaps it's related to power or noise?
John Jakson got it right about the power being the reason to use
VDD/2 sensing. But in practice, it hasn't always worked well to
count on the VDD/2 bitline precharge to eliminate the need for
dummy cells. Dummy cells help keep the bitlines better balanced
during sensing, rather than one side being heavy by the capacitance
of one cell. In addition, wordlines couple into the bitlines more
from an active cell than an off cell, so having a dummy cell
equalizes coupling noise.

To get a 1/2 level dummy cell, you simply have a dummy cell with
a back-door gate, and put a 1/2 level into it. There are several
ways to do this. One is to open the unused reference wordline after
sensing is complete, so a dummy cell is attached to the bitline on
each side of the sense amp. One will get written to "0", and the
other to "1". After the wordline is shut off, open the back-door
gate, shorting the two dummy cells together, giving a 1/2 level.

There's more to the timing than that, but that's the basics. You
can also shove a hard-generated voltage in through the back-door
gate.
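The back-door trick comes down to charge sharing between two equal
capacitors (values below are assumed, for illustration):

```python
# Two equal dummy cells, one written to VDD ("1") and one to GND ("0"),
# shorted together through the back-door gate; values are assumed.
C_dummy, VDD = 30e-15, 2.5

# Charge conservation across the two identical capacitors.
v_ref = (C_dummy * VDD + C_dummy * 0.0) / (2 * C_dummy)
print(v_ref)  # -> 1.25, i.e. exactly VDD/2
```

Because the two cells are drawn identically, the shared level lands at
VDD/2 regardless of process variation in the absolute capacitance.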

It's worth mentioning that sensing is moving away from Vdd/2,
because of sense amp stall. The operating voltages are getting so
low in modern technologies that it's getting tough to set a sense
amp. Moving to rail sensing gives the sense amp more drive, getting
away from stall conditions and giving faster performance. The
power goes up, but at the same time performance requirements are
driving toward shorter bitlines, and that can bring the power back
down.
Post by Adnan Aziz
- Q2. shouldn't DRAM writes be faster than reads? (the logic being
that in reads, the bitline is driven by the trench capacitor, but in
writes the bitline has an active driver. perhaps the reason has
something to do with senseamp logic compensating for the slow read.)
Every DRAM write is really a read-modify-write. In one cycle you will
typically sense a thousand or several thousands of cells, but will
only write 4 or 8 or 16 in that chip. The rest of those cells have to
retain their old data. So you typically latch the write data while
you start the read. When the read is complete and the data stable,
the write data is gated into the desired cells.
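A minimal behavioral sketch of that read-modify-write flow (the function
and variable names here are made up purely for illustration):

```python
# Behavioral model of a DRAM write as read-modify-write; names are
# hypothetical, chosen only to illustrate the flow described above.
def dram_write(array, row, new_bits):
    # 1. Sense the entire row into the sense amps (the destructive read).
    sense_amps = list(array[row])
    # 2. Once the read is stable, gate the latched write data into just
    #    the selected columns, so adjacent sensing is not disturbed.
    for col, bit in new_bits.items():
        sense_amps[col] = bit
    # 3. Restore: the full sense-amp contents go back into the row,
    #    refreshing the untouched cells with their old data.
    array[row] = sense_amps

mem = [[1, 0, 1, 1, 0, 0, 1, 0]]
dram_write(mem, 0, {2: 0, 5: 1})   # write only columns 2 and 5
print(mem[0])  # -> [1, 0, 0, 1, 0, 1, 1, 0]
```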

You typically wait to do the write until after the read, so you don't
disturb the sensing process in the adjacent cells. There is some art
for getting around these limits, and writing faster.

Dale Pontius