This was answered before. You might try searching the archive. If I recall
correctly, you can just comment out the call to initThreadPointers.
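For example, something along these lines (a rough sketch only; I don't have the
exact GEMS 1.3 call site in front of me, so the surrounding function below is
assumed. Grep opal/system/ for initThreadPointers to find the real caller):

    // opal/system/system.C: hypothetical caller. Only the name
    // initThreadPointers() and the assertion at line 1685 come from the
    // error output below; everything else here is illustrative.
    void system_t::initialize( void )      // assumed caller, not verified
    {
        // ... other setup ...

        // This routine only handles a few hard-coded system configurations
        // and asserts on anything it does not recognize, so skip it for the
        // 32-CPU cluster checkpoint:
        // initThreadPointers();

        // ... rest of setup ...
    }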
----- Original Message -----
From: "Thomas Yeh" <tomyeh@xxxxxxxxxxx>
To: <gems-users@xxxxxxxxxxx>
Sent: Thursday, October 05, 2006 3:57 AM
Subject: [Gems-users] error running 32 processors
I've set up a checkpoint file using Simics 3.0 which contains 32
processors. Using the same gen-script to create the jobs for different
numbers of cores, I can run all scripts with fewer than 32 processors.
When running with 32 processors, I see this error:
### Executing "opal0.setparam CONFIG_IREG_PHYSICAL 224"
### Executing "opal0.setparam CONFIG_FPREG_PHYSICAL 192"
### Executing "opal0.setparam CONFIG_CCREG_PHYSICAL 69"
### Executing "opal0.init
"/home/tomyeh/gems-1.3-simics-3/opal/config/intelcore.defaults""
simics-common: system/system.C:1685: void system_t::initThreadPointers():
Assertion `0' failed.
...
... [pstate_t: warnings]
...
system_t::initThreadPointers: unrecognized system configuration.
:: halting...
Abort (SIGABRT) in main thread
The simulation state has been corrupted. Simulation cannot continue.
Please restart Simics.
Command aborted
Does this have anything to do with the fact that I had to use a new
include file which I got from the Simics forum
(serengeti-6800-cluster-system.include instead of the default
serengeti-6800-system.include)? Any help would be appreciated.
Here are the contents of that *cluster* include file:
if not $hostid {$hostid = 0x80804a6c}
if not $freq_mhz {$freq_mhz = 75}
if not $mac_address {$mac_address = "10:10:10:10:10:24"}
if not $disk_size {$disk_size = 2128486400}
if not $rtc_time {$rtc_time = "2002-06-02 17:00:00 UTC"}
if not $num_cpus {$num_cpus = 1}
if not $megs_per_cpu {$megs_per_cpu = 256}
if not $cpu_class {$cpu_class = "ultrasparc-iii-plus"}
if not $clustered {$clustered = "no"}
add-directory "%simics%/targets/serengeti/images/"
import-pci-components
import-std-components
import-sun-components
import-serengeti-components
if $cpu_class == "ultrasparc-iii-plus" {
    $create_function = "create-serengeti-us-iii-plus-cpu-board"
} else {
    $create_function = "create-serengeti-us-iii-cpu-board"
}
if $clustered == "yes" {
    $create_chassis_function = "create-serengeti-cluster-chassis"
} else {
    $create_chassis_function = "create-serengeti-6800-chassis"
}
$system = ($create_chassis_function hostid = $hostid
                                    mac_address = $mac_address
                                    rtc_time = $rtc_time)
$board = 0
$cpus_left = $num_cpus
while $cpus_left > 0 {
    $cpus = (min 4 $cpus_left)
    $cpubrd[$board] = ($create_function num_cpus = $cpus
                                        cpu_frequency = $freq_mhz
                                        memory_megs = ($megs_per_cpu * $cpus))
    if $clustered == "yes" {
        $megs_per_cpu = 0
    }
$system.connect ("cpu-slot" + $board) $cpubrd[$board]
if ($board == 5 or $board == 15 or $board == 25 or $board == 35
or $board == 45 or $board == 55 or $board == 65 or $board == 75
or $board == 85 or $board == 95 or $board == 105) {
$board += 5
} else {
$board += 1
}
$cpus_left -= 4
}
unset cpus
$pciboard = (create-serengeti-pci8-board)
$pci_hme = (create-sun-pci-hme mac_address = $mac_address)
$pci_glm = (create-pci-sym53c875)
$scsi_bus = (create-std-scsi-bus)
$scsi_disk = (create-std-scsi-disk scsi_id = 0 size = $disk_size)
$scsi_cdrom = (create-std-scsi-cdrom scsi_id = 6)
$console = (create-std-text-console)
$system.connect io-slot6 $pciboard
$pciboard.connect pci-slot0 $pci_hme
$pciboard.connect pci-slot5 $pci_glm
$scsi_bus.connect $pci_glm
$scsi_bus.connect $scsi_disk
$scsi_bus.connect $scsi_cdrom
$system.connect $console
$machine_defined = 1
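In case it matters: with $num_cpus = 32 the loop above packs 4 CPUs per board,
so it creates eight CPU boards numbered 0, 1, 2, 3, 4, 5, 10 and 11 (the
if-test skips from slot 5 to slot 10). Here is a tiny standalone C++ sketch of
just that numbering arithmetic, purely for illustration (not GEMS or Simics
code); I'm guessing that eight-board layout is what initThreadPointers doesn't
recognize, but I'm not sure.

    // board_numbers.cc: replays the board-numbering loop from the include
    // file above for num_cpus = 32; illustration only.
    #include <cstdio>
    #include <algorithm>

    int main( void )
    {
        int num_cpus  = 32;          // $num_cpus
        int board     = 0;           // $board
        int cpus_left = num_cpus;    // $cpus_left

        while ( cpus_left > 0 ) {
            int cpus = std::min( 4, cpus_left );   // (min 4 $cpus_left)
            std::printf( "cpu-slot%d gets %d cpus\n", board, cpus );

            // same special-case increments as in the include file
            if ( board == 5  || board == 15 || board == 25 || board == 35 ||
                 board == 45 || board == 55 || board == 65 || board == 75 ||
                 board == 85 || board == 95 || board == 105 ) {
                board += 5;
            } else {
                board += 1;
            }
            cpus_left -= 4;
        }
        return 0;
    }
    // Prints cpu-slot0 through cpu-slot5, then cpu-slot10 and cpu-slot11:
    // eight boards for 32 CPUs.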
_______________________________________________
Gems-users mailing list
Gems-users@xxxxxxxxxxx
https://lists.cs.wisc.edu/mailman/listinfo/gems-users
Use Google to search the GEMS Users mailing list by adding
"site:https://lists.cs.wisc.edu/archive/gems-users/" to your search.