Thursday, August 22, 2013

Difference between uvm_component and uvm_object

What is the difference between uvm_component and uvm_object?
OR
We already have uvm_object, so why do we need uvm_component, which is actually a derived class of uvm_object?


uvm_component is a static entity that is always tied (bound) to a given piece of hardware and/or a TLM interface.
uvm_object is a dynamic entity and is not tied to any hardware/TLM interface.

A uvm_component such as uvm_driver is always connected to a particular DUT interface, because its job is fixed throughout the simulation: to drive the designated signals into the DUT.

A uvm_object such as uvm_transaction is not connected to any particular DUT interface, and its fields can take any random values subject to the randomization constraints.

Although uvm_component is derived from uvm_object, uvm_component adds these interfaces:
* Hierarchy provides methods for searching and traversing the component hierarchy.
* Phasing defines a phased test flow that all components follow, with a group of standard phase methods and an API for custom phases and multiple independent phasing domains to mirror DUT behaviour (e.g. power).
* Configuration provides methods for configuring component topology and other parameters ahead of and during component construction.
* Reporting provides a convenient interface to the uvm_report_handler. All messages, warnings, and errors are processed through this interface.
* Transaction recording provides methods for recording the transactions produced or consumed by the component to a transaction database (vendor specific).
* Factory provides a convenient interface to the uvm_factory. The factory is used to create new components and other objects based on type-wide and instance-specific configuration.
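As a minimal sketch of the difference (all class and field names here are made up for illustration), a transaction is typically modeled as a uvm_object while a driver is a uvm_component:

// A transaction is a uvm_object: created and randomized many times during the run.
class my_packet extends uvm_sequence_item;   // uvm_sequence_item derives from uvm_object
  rand bit [7:0] addr;
  rand bit [7:0] data;
  `uvm_object_utils(my_packet)
  function new(string name = "my_packet");
    super.new(name);
  endfunction
endclass

// A driver is a uvm_component: built once, placed in the hierarchy, phased,
// and tied to the DUT interface for the whole simulation.
class my_driver extends uvm_driver #(my_packet);
  `uvm_component_utils(my_driver)
  function new(string name, uvm_component parent);
    super.new(name, parent);            // the parent handle fixes its place in the component hierarchy
  endfunction
  task run_phase(uvm_phase phase);      // participates in the standard UVM phasing
    forever begin
      seq_item_port.get_next_item(req);
      // drive req onto the DUT interface here
      seq_item_port.item_done();
    end
  endtask
endclass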



Thanks!


DUT - Design Under Test
TLM - Transaction Level Modelling
API - Application Programming Interface

Sample STMicroelectronics interview questions





Sample HR questions which I need to prepare


1. Tell me about yourself?


2. Why do you want to work for STMicroelectronics?


3. Do you know anyone who works for STMicroelectronics?


4. Why should STMicroelectronics hire you?


5. What can you do for this company?


6. What can you do for STMicroelectronics that other candidates can’t?


7. How would your past experience translate into success in this job?


8. Explain how you would be an asset to STMicroelectronics?


9. What do you know about STMicroelectronics?


10. Please tell me about some products/services of STMicroelectronics in the market. What do you like/dislike about them?


11. If you worked for STMicroelectronics, what would you be doing?


12. Please tell me about some products/services in the market that compete with STMicroelectronics's. What are the differences?


13. Why have you applied for this particular job in this field?


14. What do you like about your present job in this field?


15. What have you learned from your mistakes?


16. What do you dislike about your present job in this field?


17. What are the most difficult decisions to make?


18. How would you describe your work style?


19. Are you overqualified for this job?


20. Why do you want to leave your current employer?


Wednesday, June 19, 2013

Semaphore

A semaphore is used to lock/unlock a common resource. If the resource is in the unlocked state it can be used; if it is in the locked state, it cannot be used.

Let's assume that in a verification environment we have a single Ethernet port/driver and there are two packet generators using this one Ethernet port: one generating normal frames and the other generating flow-control (pause) frames.

Each of these two packet generators can use the semaphore to check whether the Ethernet driver is free (unlocked) before it transmits a frame. If the driver is free, the generator locks the driver, transmits the frame, and once the frame is transmitted, unlocks the driver.



While the driver is in the locked state, the other generator waits until the driver is unlocked.
Semaphores in SystemVerilog provide the following methods for these operations.


Semaphore allocation : new()
Using semaphore keys : get()
Returning semaphore keys : put()
Try to obtain one or more keys without blocking: try_get()

new()
Semaphores are created with the new() method.
function new(int keyCount = 0 );
Where
keyCount : specifies the number of keys initially allocated to the semaphore bucket.
The new() function returns the semaphore handle or, if the semaphore cannot be created, null.


put()
The semaphore put() method is used to return keys to a semaphore.
task put(int keyCount = 1);
Where
The keyCount specifies the number of keys being returned to the semaphore. The default is 1.

get()
The semaphore get() method is used to procure a specified number of keys from a semaphore.
task get(int keyCount = 1);
Where
The keyCount specifies the required number of keys to obtain from the semaphore. The default is 1.
If the specified number of keys is not available, the process blocks until the keys become available.
The semaphore waiting queue is first-in first-out (FIFO).

try_get()
The semaphore try_get() method is used to procure a specified number of keys from a semaphore, but without blocking.
function int try_get(int keyCount = 1);
Where
The keyCount specifies the required number of keys to obtain from the semaphore. The default is 1.
If the specified number of keys is available, the method returns a positive integer and execution continues.
If the specified number of keys is not available, the method returns 0.
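As a small usage sketch (assuming a semaphore handle semBus created elsewhere with new(1), as in the example below), a process that must not stall can poll with try_get() instead of blocking:

if (semBus.try_get(1)) begin
  // got the key: use the shared resource, then return the key
  semBus.put(1);
end
else begin
  // key not available right now: do something else and try again later
end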


Example : Semaphore
program semaphore_ex;
  semaphore semBus = new(1);

  initial begin
    fork
      agent("AGENT 0", 5);
      agent("AGENT 1", 20);
    join
  end

  task automatic agent(string name, integer nwait);
    integer i = 0;
    for (i = 0; i < 4; i++) begin
      semBus.get(1);
      $display("[%0d] Lock semBus for %s", $time, name);
      #(nwait);
      $display("[%0d] Release semBus for %s", $time, name);
      semBus.put(1);
      #(nwait);
    end
  endtask

endprogram

-----------------------------------------------------------------------------------------------------------------------
Another explanation of the same thing:


Inter Process Communication: Semaphores

A semaphore allows you to control access to a resource. A semaphore is an equivalent of a key (or a set of keys) when a process tries to access a shared resource.

A semaphore is first associated with a resource that needs to be protected. Whenever a process wants to access this resource, it seeks a key from the semaphore. Depending on availability, a key is either allotted to the requesting process or not. If a key is allotted to a process, then when that process is done using the resource, it gives the key back to the semaphore, and that key may be allotted to other processes waiting for a key.

Syntactically, the semaphore is a built-in class that allows only certain pre-defined operations on the keys.

A semaphore declaration is shown below:
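semaphore sem;      // "sem" is just an example handle name
sem = new(1);       // construct the semaphore with one key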


Built-in methods in Semaphore

There are four predefined methods in the semaphore built-in class:
new( ): Create a semaphore with a specified number of keys.
get( ): Obtain one or more keys from a semaphore, blocking until the keys are available.
try_get( ): Obtain one or more keys from a semaphore without blocking.
put( ): Return one or more keys to a semaphore.

new( )
Just as any other class, a semaphore needs a constructor.

The prototype declaration for new( ) is shown below:
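function new(int keyCount = 0);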

The constructor new( ) has an integer argument key_count that can be used for creating a desired number of keys.
The default value of key_count is 0.
Upon success, the new( ) function returns the semaphore handle, otherwise, it returns null.

get( )
The get( ) task is used for obtaining one or multiple keys for a semaphore.

The prototype declaration for get( ) is shown below:
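task get(int keyCount = 1);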

The number of keys to procure is passed as the argument to get( ).
The default value of this argument is 1.
If a process asks for a certain number of keys and they are available, the call to get( ) returns and execution continues.
If the required number of keys is not available, the call to get( ) blocks the subsequent statements and waits for additional keys to become available. That is why get( ) is a task, not a function, and hence can consume time.
All calls to get( ) are queued in a FIFO and keys are granted on a first-come, first-served basis.


try_get( )
If you do not want to block while trying to get keys for a semaphore, try_get( ) is your solution. Unlike get( ), try_get( ) is a function that checks for key availability and procures them if they are available (and returns 1). But, if they are not, try_get( ) does not block and returns 0.

The prototype for try_get( ) is shown below:
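function int try_get(int keyCount = 1);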


put( )
Now, suppose that a process is done using a resource. According to the good code of conduct in the semaphore land, that process must return the keys (so that another process can use them). This is done by the put( ) task, where the number of returned keys is passed as an argument.

The prototype for put( ) is shown below:
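task put(int keyCount = 1);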


#copied from internet : another resource

Friday, May 24, 2013

Mailbox


A mailbox is a mechanism to exchange messages between processes. Data can be sent to a mailbox by one process and retrieved by another. A mailbox can be used as a FIFO if required. The data can be of any valid SystemVerilog data type, including class types.

SystemVerilog provides the following methods for working with mailboxes:

  • Mailbox allocation : new()
  • Put data : put()
  • Try to place a message in a mailbox without blocking : try_put()
  • Get data : get() or peek()
  • Try to retrieve a message from a mailbox without blocking : try_get() or try_peek()
  • Retrieve the number of messages in the mailbox : num()

Non-parameterized mailboxes are typeless, that is, a single mailbox can send and receive different types of data. Thus, in addition to the data being sent (i.e., the message queue), a mailbox implementation must maintain the message data type placed by put(). This is required in order to enable run-time type checking.
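If compile-time type checking is desired instead, a mailbox can be parameterized with the message type (a brief sketch; my_packet is a made-up class name):

mailbox #(my_packet) pkt_mbx = new();   // only my_packet handles can be put into or taken from pkt_mbx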
  
new()
Mailboxes are created with the new() method.

function new(int bound = 0);

  • The new() function returns the mailbox handle or, if the mailbox cannot be created, null.
  • If the bound argument is 0, then the mailbox is unbounded (the default) and a put() operation shall never block.

num()
The number of messages in a mailbox can be obtained via the num() method.

function int num();

  • The num() method returns the number of messages currently in the mailbox.
  • The returned value should be used with care because it is valid only until the next get() or put() is executed on the mailbox.

put()
The put() method places a message in a mailbox.

task put( singular message );

  • The message is any singular expression, including object handles.
  • The put() method stores a message in the mailbox in strict FIFO order.
  • If the mailbox was created with a bounded queue, the process shall be suspended until there is enough space in the queue.
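For example (a small sketch with made-up names and sizes), a bounded mailbox suspends the producer once it is full:

mailbox #(bit [7:0]) bmbx = new(2);   // bounded: holds at most 2 messages

initial begin
  bmbx.put(8'h11);   // succeeds
  bmbx.put(8'h22);   // succeeds, the mailbox is now full
  bmbx.put(8'h33);   // blocks here until another process calls bmbx.get()
end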
  
try_put()
The try_put() method attempts to place a message in a mailbox without blocking.

function int try_put( singular message );

  • The try_put() method stores a message in the mailbox in strict FIFO order.
  • If the mailbox is full, the method returns 0.

get()
The get() method retrieves a message from a mailbox.

task get( ref singular message );

  • The get() method removes one message from the queue.
  • If the mailbox is empty, then the current process blocks until a message is placed in the mailbox.
  • If the type of the message variable is not equivalent to the type of the message in the mailbox, a run-time error is generated.

try_get()
The try_get() method attempts to retrieve a message from a mailbox without blocking.

function int try_get( ref singular message );

  • The try_get() method tries to retrieve one message from the mailbox.
  • If the mailbox is empty, the method returns 0.

peek()
The peek() method copies a message from a mailbox without removing it from the queue.

task peek( ref singular message );

  • The peek() method copies one message from the mailbox without removing the message from the mailbox queue.
  • If the mailbox is empty, then the current process blocks until a message is placed in the mailbox.

try_peek()
The try_peek() method attempts to copy a message from a mailbox without blocking.

function int try_peek( ref singular message );

  • The try_peek() method tries to copy one message from the mailbox without removing the message from the mailbox queue.
  • If the mailbox is empty, the method returns 0.

Example : Mailbox

program mailbox_ex;
  mailbox checker_data = new();

  initial begin
    fork
      input_monitor();
      checker();
    join_any
    #1000;
  end

  task input_monitor();
    begin
      integer i = 0;
      // This can be any valid data type
      bit [7:0] data = 0;
      for (i = 0; i < 4; i++) begin
        #(3);
        data = $random();
        $display("[%0d] Putting data : %x into mailbox", $time, data);
        checker_data.put(data);
      end
    end
  endtask

  task checker();
    begin
      integer i = 0;
      // This can be any valid data type
      bit [7:0] data = 0;
      while (1) begin
        #(1);
        if (checker_data.num() > 0) begin
          checker_data.get(data);
          $display("[%0d] Got data : %x from mailbox", $time, data);
        end else begin
          #(7);
        end
      end
    end
  endtask

endprogram

Wednesday, May 22, 2013

Race Condition

Verilog is easy to learn because it gives quick results. Although many users claim that their work is free from race conditions, the fact is that a race condition is easy to create, easy to understand, and easy to document, but difficult to find. Here we will discuss the events that create race conditions and the solutions for them.


What Is Race Condition? 
When two expressions are scheduled to execute at the same time and the order of execution is not determined, a race condition occurs.


EXAMPLE 
module race();
  wire p;
  reg q;
  assign p = q;

  initial begin
    q = 1;
    #1 q = 0;
    $display(p);
  end
endmodule



The simulator is correct in displaying either a 1 or a 0. The assignment of 0 to q schedules an update event for p. The simulator may either continue and execute the $display system task, or execute the update for p first, followed by the $display task.
So guess: what can the value of p be?
Simulate the above code in your simulator. Then simulate the following code, in which the statement "assign p = q;" has been moved to the end of the module.



EXAMPLE
module race();
  wire p;
  reg q;

  initial begin
    q = 1;
    #1 q = 0;
    $display(p);
  end

  assign p = q;
endmodule



Analyze the effect of moving the assign statement.



Why Race Condition? 



To describe the behavior of electronic hardware at varying levels of abstraction, Verilog HDL has to be a parallel programming language. The Verilog language and its simulator semantics are IEEE standards, yet some event orderings are nondeterministic, are not specified in the IEEE LRM, and are left to the simulator's algorithm; these cause race conditions. So it is impossible to remove race conditions from the language itself, but we can avoid them through coding style.

Look at the following code. Is there any race condition?



EXAMPLE: 
initial 
begin 
in = 1; 
out <= in; 
end 



Now if you swap these two lines: 


EXAMPLE 
initial 
begin 
out <= in; 
in = 1; 
end 



Think, is there any race condition created? 
Here the first statement will schedule a nonblocking update for "out" to whatever "in" was set to previously, and then "in" will be set to 1 by the blocking assignment. Statements in a sequential block (i.e. a begin-end block), whether blocking or nonblocking, are guaranteed to execute in the order they appear. So there is no race condition in this code either. Since it is easy to make this "ordering mistake", one of the Verilog coding guidelines is: "Do not mix blocking and nonblocking assignments in the same always block". Mixing them creates unnecessary doubt about race conditions.


When Is the Race Visible?



Sometimes unexpected output gives a clue to search for a race. Even if a race condition exists in the code, if the output happens to be correct one may not realize it is there. Such hidden race conditions may surface in the following situations:

When different simulators are used to run the same code.
When a new release of the simulator is used.
When more code is added to the existing code, which might expose the previously hidden race.
When the order of the files is changed.
When some tool-specific options are used.
When the order of the concurrent blocks or concurrent statements is changed. (One example was already discussed above.)

Some simulators have special options that report exactly where a race condition exists. Linting tools can also catch race conditions.



How To Prevent Race Condition? 



Many details are left unspecified and differ between simulators. The problem becomes apparent when you use different simulators. If you restrict yourself to design guidelines there is less chance of a race condition, but if you use Verilog with all its features for a testbench, races are impossible to avoid completely. Moreover, the language you are using is parallel but the processor is sequential. So you cannot entirely prevent race conditions.



Types Of Race Condition 



Here we will look at race conditions more closely.



Write-Write Race: 



It occurs when the same register is written in two different blocks.


EXAMPLE: 
always @(posedge clk)
  a = 1;
always @(posedge clk)
  a = 5;



Here one block is updating the value of a while the other one is too. Which always block should execute first? This is nondeterministic in the IEEE standard, and the decision is left to the simulator's algorithm.



Read-Write Race: 



It occurs when the same register is read in one block and written in another.


EXAMPLE: 
always @(posedge clk)
  a = 1;
always @(posedge clk)
  b = a;



Here one always block assigns a value to a while, at the same time, another block assigns a's value to b; that is, a is written and read in parallel. This type of race condition can easily be solved by using nonblocking assignments.



EXAMPLE 
always @(posedge clk)
  a <= 1;
always @(posedge clk)
  b <= a;

More Race Examples:



1) Function calls 


EXAMPLE: 
function integer incri();
  begin
    pkt_num = pkt_num + 1;
    incri = pkt_num;   // return the updated count
  end
endfunction

always @(...)
  sent_pkt_num = incri();

always @(...)
  sent_pkt_num_onemore = incri();



2) Fork join 


EXAMPLE: 
fork
  a = 0;
  b = a;
join



3) $random 


EXAMPLE: 
always @(...) 
$display("first Random number is %d",$random()); 
always @(...) 
$display("second Random number is %d",$random()); 



4) Clock race 


EXAMPLE 
initial 
clk = 0; 
always 
clk = #5 ~clk; 



If your clock generator is always showing "X", there is a race condition. There is one more point to note in the above example: the initial and always blocks both start executing at time zero.

5) Declaration and initial 


EXAMPLE: 
reg a = 0; 
initial
  a = 1;



6) Testbench-DUT race condition

In the testbench, if driving is done at the posedge and reading in the DUT is done at the same time, there is a race. To avoid this, drive from the testbench at the negedge or before the posedge of the clock. This makes sure that the DUT samples the signal without any race.


EXAMPLE: 
module DUT(d, clock, q);
  input d;
  input clock;
  output reg q;

  always @(posedge clock)
    q = d;

endmodule

module testbench();
  reg d, clk;
  wire q;

  DUT dut_i(d, clk, q);

  always #5 clk = ~clk;

  initial begin
    clk = 0;
    @(posedge clk) d = 1;
    @(posedge clk) d = 0;
  end
endmodule

The above example has a write-read race condition.
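One way to remove this race, following the guideline above, is to drive the stimulus on the negedge so that the DUT samples a stable d at the next posedge (a sketch reusing the DUT module above):

module testbench_fixed();
  reg d, clk;
  wire q;

  DUT dut_i(d, clk, q);

  always #5 clk = ~clk;

  initial begin
    clk = 0;
    @(negedge clk) d = 1;   // driven half a clock before the DUT samples it
    @(negedge clk) d = 0;
  end
endmodule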

Event Terminology: 



Every change in the value of a net or variable in the circuit being simulated, as well as a named event, is considered an update event. Processes are sensitive to update events. When an update event is executed, all the processes that are sensitive to that event are evaluated in an arbitrary order. The evaluation of a process is also an event, known as an evaluation event.

In addition to events, another key aspect of a simulator is time. The term simulation time is used to refer to the time value maintained by the simulator to model the actual time it would take for the circuit being simulated. The term time is used interchangeably with simulation time in this section. Events can occur at different times. In order to keep track of the events and to make sure they are processed in the correct order, the events are kept on an event queue, ordered by simulation time. Putting an event on the queue is called scheduling an event. 



The Stratified Event Queue 



The Verilog event queue is logically segmented into five different regions. Events are added to any of the five regions but are only removed from the active region. 

1) Events that occur at the current simulation time and can be processed in any order. These are the active events. Examples:
1.1 evaluation of a blocking assignment
1.2 evaluation of the RHS of a nonblocking assignment
1.3 evaluation of a continuous assignment
1.4 evaluation of primitive inputs/outputs
1.5 evaluation of $display or $write

2) Events that occur at the current simulation time, but that shall be processed after all the active events are processed. These are the inactive events. Example: a #0 delay statement.

3) Events that have been evaluated during some previous simulation time, but that shall be assigned at this simulation time after all the active and inactive events are processed. These are the nonblocking assign update events. 

4) Events that shall be processed after all the active, inactive, and non blocking assign update events are processed. These are the monitor events. 
$strobe and $monitor 

5) Events that occur at some future simulation time. These are the future events. Future events are divided into future inactive events, and future non blocking assignment update events. 

Example : PLI tasks 

The processing of all the active events is called a simulation cycle. 


Determinism 



This standard guarantees a certain scheduling order. 

1) Statements within a begin-end block shall be executed in the order in which they appear in that begin-end block. Execution of statements in a particular begin-end block can be suspended in favor of other processes in the model; however, in no case shall the statements in a begin-end block be executed in any order other than that in which they appear in the source. 

2) Non blocking assignments shall be performed in the order the statements were executed. 

Consider the following example: 


initial begin 
  a <= 0;
  a <= 1;
end 


When this block is executed, there will be two events added to the non blocking assign update queue. The previous rule requires that they be entered on the queue in source order; this rule requires that they be taken from the queue and performed in source order as well. Hence, at the end of time step 1, the variable a will be assigned 0 and then 1. 



Nondeterminism 




One source of nondeterminism is the fact that active events can be taken off the queue and processed in any order. Another source of nondeterminism is that statements without time-control constructs in behavioral blocks do not have to be executed as one event. Time control statements are the # expression and @ expression constructs. At any time while evaluating a behavioral statement, the simulator may suspend execution and place the partially completed event as a pending active event on the event queue. The effect of this is to allow the interleaving of process execution. Note that the order of interleaved execution is nondeterministic and not under control of the user. 



Guideline To Avoid Race Condition 



(A). Do not mix blocking and nonblocking assignments in the same block.
(B). Do not read and write the same variable using blocking assignments. (Avoids read-write races.)
(C). Do not initialize at time zero.
(D). Do not assign a variable in more than one block. (Avoids write-write races.)
(E). Use continuous assign statements for inout ports, and do not mix blocking and nonblocking assignment styles in the same block. Disallow a variable assigned with a blocking assignment in a clocked always block from being used outside that block, disallow cyclical references that do not go through a nonblocking assignment, and require all nonblocking assignments to be in a clocked always block.
(F). Use blocking assignments for combinational logic and nonblocking assignments for sequential logic. If you want gated outputs from the flops, put them in continuous assignments or in an always block with no clock.



Avoid Race Between Testbench And DUT



A race condition may occur between the DUT and the testbench. Sometimes verification engineers are not allowed to see the DUT; sometimes they don't even have the DUT to verify. Consider the following example: a testbench is required to wait for a specific response from its DUT, and once it receives the response, at the same simulation time it needs to send a set of stimuli back to the DUT.

Most synchronous DUTs work on the posedge of the clock. If the testbench also takes the same reference, we may well end up in a race condition, so it is better to choose some event other than exactly the posedge of the clock. Signals are stable some delay after the posedge of the clock, so sampling is free of races if it is done some delay after the posedge. A driving race can be avoided if the signal is driven before the posedge of the clock, so that at the posedge the DUT samples a stable signal. This is why engineers prefer to sample and drive on the negedge of the clock; it is simple and also easy to debug in a waveform viewer.



// Content is copied from testbench.in and edited a bit.
// You can give your inputs as comments.
