[Disksim-users] Max Addressable LBA in SSD Trace File

Al-Dahlawi dahlawi at IEEE.ORG
Sat Jun 25 22:47:03 EDT 2011


Greetings, all.

Does anyone know how to calculate the maximum addressable LBA that can be used
in an SSD trace file, given the parv file below? (64 GB SSD)

My understanding is that the maximum LBA is calculated as follows:

2048 blocks * 8 planes * 64 pages * 8 elements = 8388608 pages in the SSD

And since each page in the SSD is 8 KB, it can hold 16 LBAs of 512 bytes
each.

So the maximum addressable LBA is 8388608 * 16 = 134217728.
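
For what it's worth, here is a small Python sketch of that arithmetic; the
8 KB page size and 512-byte sector size are my own assumptions here:

# Sketch of the capacity arithmetic above.
# Assumptions: 8 KB flash pages, 512-byte sectors (LBAs).
blocks_per_plane   = 2048
planes_per_element = 8
pages_per_block    = 64
elements           = 8

total_pages = blocks_per_plane * planes_per_element * pages_per_block * elements
print(total_pages)        # 8388608 pages

sectors_per_page = (8 * 1024) // 512          # 16 LBAs per 8 KB page
total_sectors    = total_pages * sectors_per_page
print(total_sectors)      # 134217728 sectors

# Note: with N sectors the valid LBA range is 0 .. N-1, so under these
# assumptions the largest addressable LBA would be total_sectors - 1.
print(total_sectors - 1)  # 134217727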

However, when I use this LBA in my SSD trace file, I get an error:

unexpected request location: devno 0, blkno 134217728, bcount 8

Assertion failed:
simtime     = 5297.450000
totalreqs     = 3
disksim: disksim_logorg.c:763: logorg_maprequest: Assertion `logorgno != -1'
failed.
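
In DiskSim's default ASCII trace format (one request per line:
arrival-time devno blkno bcount flags), the offending request would be a
line roughly like the one below; the timestamp is taken from the error
output, and the flags value of 1 (read) is just for illustration:

5297.450000 0 134217728 8 1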


Any feedback?

# ---------------------------------------------------------
disksim_global Global {
 Init Seed = 42,
 Real Seed = 42,
 # Statistic warm-up period = 0.0 seconds,
 Stat definition file = statdefs
}
# ---------------------------------------------------------
disksim_stats Stats {

iodriver stats = disksim_iodriver_stats {
 Print driver size stats = 1,
 Print driver locality stats = 0,
 Print driver blocking stats = 0,
 Print driver interference stats = 0,
 Print driver queue stats = 1,
 Print driver crit stats = 0,
 Print driver idle stats = 1,
 Print driver intarr stats = 1,
 Print driver streak stats = 1,
 Print driver stamp stats = 1,
 Print driver per-device stats = 1 },

bus stats = disksim_bus_stats {
 Print bus idle stats = 1,
 Print bus arbwait stats = 1 },

ctlr stats = disksim_ctlr_stats {
 Print controller cache stats = 1,
 Print controller size stats = 1,
 Print controller locality stats = 1,
 Print controller blocking stats = 1,
 Print controller interference stats = 1,
 Print controller queue stats = 1,
 Print controller crit stats = 1,
 Print controller idle stats = 1,
 Print controller intarr stats = 1,
 Print controller streak stats = 1,
 Print controller stamp stats = 1,
 Print controller per-device stats = 1 },

device stats = disksim_device_stats {
 Print device queue stats = 0,
 Print device crit stats = 0,
 Print device idle stats = 0,
 Print device intarr stats = 0,
 Print device size stats = 0,
 Print device seek stats = 1,
 Print device latency stats = 1,
 Print device xfer stats = 1,
 Print device acctime stats = 1,
 Print device interfere stats = 0,
 Print device buffer stats = 1 },

process flow stats = disksim_pf_stats {
 Print per-process stats =  1,
 Print per-CPU stats =  1,
 Print all interrupt stats =  1,
 Print sleep stats =  1
 }
} # end of stats block
# ---------------------------------------------------------
#disksim_iosim IS {
#     I/O Trace Time Scale = 1.0
#}  # end of iosim spec
# ---------------------------------------------------------
disksim_iodriver DRIVER0 {
type = 1,
Constant access time = 0.0,
Scheduler = disksim_ioqueue {
 Scheduling policy = 3,
 Cylinder mapping strategy = 1,
 Write initiation delay = 0.0,
 Read initiation delay = 0.0,
 Sequential stream scheme = 0,
 Maximum concat size = 128,
 Overlapping request scheme = 0,
 Sequential stream diff maximum = 0,
 Scheduling timeout scheme = 0,
 Timeout time/weight = 6,
 Timeout scheduling = 4,
 Scheduling priority scheme = 0,
 Priority scheduling = 4
}, # end of Scheduler
Use queueing in subsystem = 1
} # end of DRV0 spec
# ---------------------------------------------------------
disksim_bus BUSTOP {
type = 1,
Arbitration type = 1,
Arbitration time = 0.0,
Read block transfer time = 0.0,
Write block transfer time = 0.0,
Print stats =  1
} # end of BUSTOP spec
# ---------------------------------------------------------
disksim_bus BUSHBA {
type = 2,
Arbitration type = 1,
Arbitration time = 0.001,

# PCI-E with 8b/10b encoding gives 2.0 Gbps per lane; with 8 lanes
# that is about 2.0 GB/s, so the bulk sector transfer time is about
# 0.238 us. SATA/300 can transfer data at 300 MB/s, which amounts to
# about 1.6276 us per sector.
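# (arithmetic check, my own: 512 B / (2.0 * 2^30 B/s) ~= 0.238 us and
# 512 B / (300 * 2^20 B/s) ~= 1.628 us, matching the values below)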

Read block transfer time = 0.0002384,
Write block transfer time = 0.0002384,
#Read block transfer time = 0.0016276,
#Write block transfer time = 0.0016276,

Print stats =  1
} # end of BUSHBA spec
# ---------------------------------------------------------
disksim_ctlr CTLR0 {
type = 1,
Scale for delays = 0.0,
Bulk sector transfer time = 0.0,
Maximum queue length = 100,
Print stats =  1
} # end of CTLR0 spec
# ---------------------------------------------------------
# Don't change the order of the following parameters.
# We use "Flash chip elements" and "Elements per gang" to
# find the number of gangs -- we need this info before initializing
# the queue (disksim_ioqueue).

ssdmodel_ssd SSD {
     # vp - this is a percentage of total pages in the ssd
     Reserve pages percentage = 15,
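     # (my arithmetic/assumption: 15% of the 8388608 total pages is about
     # 1258291 pages held back, so the capacity visible to the trace should
     # be smaller than the raw total)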

     # vp - min percentage of free blocks needed. if the free
     # blocks drop below this, cleaning kicks in
     Minimum free blocks percentage = 5,

     # vp - a simple read-modify-erase-write policy = 1 (no longer supported)
     # vp - osr write policy = 2
     Write policy = 2,

     # vp - random = 1 (not supp), greedy = 2, wear-aware = 3
     Cleaning policy = 2,

     # vp - number of planes in each flash package (element)
     Planes per package = 8,

     # vp - number of flash blocks in each plane
     Blocks per plane = 2048,

     # vp - how the blocks within an element are mapped on a plane
     # simple concatenation = 1, plane-pair striping = 2 (not tested),
     # full striping = 3
     Plane block mapping = 3,

     # vp - copy-back enabled (1) or not (0)
     Copy back = 1,

     # how many parallel units are there?
     # entire elem = 1, two dies = 2, four plane-pairs = 4
     Number of parallel units = 1,

     # vp - we use diff allocation logic: chip/plane
     # each gang = 0, each elem = 1, each plane = 2
     Allocation pool logic = 1,

     # elements are grouped into a gang
     Elements per gang = 1,

     # shared bus (1) or shared control (2) gang
     Gang share = 1,

     # when do we want to do the cleaning?
     Cleaning in background = 0,

     Command overhead =  0.00,
     Bus transaction latency =  0.0,

#    Assuming PCI-E with 8b/10b encoding: 2.0 Gbps per lane, and with
#    8 lanes about 2.0 GB/s, so the bulk sector transfer time is about
#    0.238 us. Use the "Read block transfer time" and "Write block
#    transfer time" from disksim_bus above.
     Bulk sector transfer time =  0,

     Flash chip elements = 8,

     Page size = 8,

     Pages per block = 64,

     # vp - changing the number of blocks from 16184 to 16384
     Blocks per element = 16384,
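     # (cross-check: 2048 blocks/plane * 8 planes = 16384 blocks per element,
     # and 8 elements * 16384 blocks * 64 pages/block = 8388608 pages total)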

     Element stride pages = 1,

     Never disconnect =  1,
     Print stats =  1,
     Max queue length =  20,
     Scheduler = disksim_ioqueue {
       Scheduling policy =  1,
       Cylinder mapping strategy =  0,
       Write initiation delay =  0,
       Read initiation delay =  0.0,
       Sequential stream scheme =  0,
       Maximum concat size =  0,
       Overlapping request scheme =  0,
       Sequential stream diff maximum =  0,
       Scheduling timeout scheme =  0,
       Timeout time/weight =  0,
       Timeout scheduling =  0,
       Scheduling priority scheme =  0,
       Priority scheduling =  1
     },
     Timing model = 1,

     # vp - changing the Chip xfer latency from per sector to per byte
     Chip xfer latency = 0.000025,
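     # (i.e. 25 ns per byte, or roughly 40 MB/s per flash chip, by my
     # arithmetic)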

     Page read latency = 0.025,
     Page write latency = 0.200,
     Block erase latency = 1.5
}  # end of SSD spec

# ---------------------------------------------------------
# HP_C3323A
source atlas10k.diskspecs
source ibm18es.diskspecs
source cheetah9LP.diskspecs
# ---------------------------------------------------------
# component instantiation
instantiate [ statfoo ] as Stats

instantiate [ ssd0x0 ]  as  SSD
instantiate [ bustop ]  as  BUSTOP
instantiate [ busHBA0 ] as  BUSHBA

instantiate [ driver0 ] as  DRIVER0
instantiate [ ctlr0 ]   as  CTLR0

# ---------------------------------------------------------
# system topology
topology disksim_iodriver driver0 [
   disksim_bus bustop [
      disksim_ctlr ctlr0 [
         disksim_bus busHBA0 [
            ssdmodel_ssd ssd0x0 []
         ]
      ]
   ]
]
# no syncsets
# ---------------------------------------------------------
disksim_logorg org0 {
   Addressing mode = Array,
   Distribution scheme = Striped,
   Redundancy scheme = Noredun,

   # vp - added more ssd elements
   devices = [ ssd0x0 ],

   Stripe unit  =  128,
   Synch writes for safety =  0,
   Number of copies =  2,
   Copy choice on read =  6,
   RMW vs. reconstruct =  0.5,
   Parity stripe unit =  128,
   Parity rotation type =  1,
   Time stamp interval =  0.000000,
   Time stamp start time =  60000.000000,
   Time stamp stop time =  10000000000.000000,
   Time stamp file name =  stamps
} # end of logorg org0 spec
#--------------------------------------------------------------



-- 
Abdullah Al-Dahlawi
----
Check The Fastest 500 Super Computers Worldwide
http://www.top500.org/list/2010/06/100