jsimIO Automatic

The package consists of two files containing functions (DataQuery and DataProcess) and one file to be run (DataModel).

Automatic data extraction

The file DataQuery contains 4 functions:
def process_plan():
    return final_prod, task, task_type, successor, time, machine

def components():
    return component, ass_task

def buffers():
    return capacity

def machines():
    return pred, succ
Each function takes a query as input and returns a set of arrays as output.
  • final_prod: The name of the final product.
  • task: The name of the process steps.
  • task_type: The type (Manufacturing, Handling, Assembly, Quality Control, Loading, Unloading) of the process steps.
  • successor: The successive process step.
  • time: The service time of the process steps.
  • machine: The station where the process steps are executed.
  • component: The name of the components.
  • ass_task: The name of the process steps on which the components are assembled.
  • capacity: The capacity of the buffers.
  • pred: The name of all the physical elements of the system.
  • succ: The downstream element of each physical element.
The elements of these arrays are returned in random order. To develop each function the steps below are followed: (The code below is related to the function process_plan)
  1. Import the required libraries.
    from SPARQLWrapper import SPARQLWrapper, JSON
    import numpy
  2. Write the query as a text file. The SPARQL language has 4 main components:
    • PREFIX clause to find the documents with the given prefixes.
    • FROM and FROM NAMED clauses specify the RDF dataset to be addressed.
    • SELECT clause identifies the variables to appear in the results.
    • WHERE clause provides the basic graph pattern to match against the data graph.
      PREFIX rdf: <>
      PREFIX rdfs: <>
      PREFIX owl: <>
      PREFIX ifc: <>
      PREFIX ifcext: <>
      PREFIX factory: <>
      SELECT DISTINCT ?parttype ?pplan ?task ?taskclass ?succ ?timeDet
      FROM <> FROM NAMED <>
      FROM <> FROM NAMED <>
      FROM <> FROM NAMED <>
      FROM <> FROM NAMED <>
      WHERE {
        # get process plans
        ?pplan rdf:type owl:NamedIndividual .
        ?pplan rdf:type/rdfs:subClassOf* ifc:IfcTaskType .
        ?parttype ifcext:hasAssignedObject|^ifcext:hasAssignmentTo ?pplan .
        ?parttype rdf:type/rdfs:subClassOf* factory:ArtifactType .
      }
  3. Import the text file of the query, convert it to a string, pass the string as the SPARQL query, interrogate the endpoint and return the results.
    # open the text file in read mode
    text_file = open("Query/ProcessPlan.txt", "r")
    # convert the file to a string
    data = text_file.read()
    # close the file
    text_file.close()
    sparql = SPARQLWrapper("") # endpoint
    sparql.setQuery(data)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
  4. Convert the results to arrays.
    array1 = []
    array2 = []
    array3 = []
    for result in results["results"]["bindings"]:
        array1.append(result["parttype"]["value"])
        array2.append(result["pplan"]["value"])
        array3.append(result["task"]["value"])
  5. Convert the values from URI (Uniform Resource Identifier) format to short URIs.
    s1 = "#"
    final_prod = []
    p_plan = []
    task = []
    for i in range(0, len(array1)):
        final_prod.append(array1[i][array1[i].index(s1) + len(s1):])
        p_plan.append(array2[i][array2[i].index(s1) + len(s1):])
        task.append(array3[i][array3[i].index(s1) + len(s1):])
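The short-URI conversion can be exercised in isolation; the sketch below uses hypothetical URIs in place of the query results:

```python
s1 = "#"  # in a full URI, the short name follows the '#' separator

# hypothetical URIs standing in for the raw query results
array1 = ["http://example.org/factory#Part_A",
          "http://example.org/factory#Part_B"]

final_prod = []
for i in range(0, len(array1)):
    # keep only the fragment after the '#'
    final_prod.append(array1[i][array1[i].index(s1) + len(s1):])

print(final_prod)  # ['Part_A', 'Part_B']
```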

Automatic data processing

The file DataProcess contains the function:
def data_process():
    return order_machine, order_task_type, order_time, order_capacity, order_ass, part, part_type
The function takes as input the results from DataQuery and returns a set of arrays as output.
  • order_machine: The station where the process steps are executed.
  • order_task_type: The type (Manufacturing, Handling, Assembly, Quality Control, Loading, Unloading) of the process steps.
  • order_time: The service time of the process steps.
  • order_capacity: The capacity of the buffers.
  • order_ass: The name of the assembly tasks.
  • part: The name of the parts.
  • part_type: The type (Final, Component) of the parts.
To develop the function the steps below are followed:
  1. Import the required functions and the related input arrays.
    from Query.DataQuery import process_plan, components, buffers
  2. Check that exactly one process plan and one final product exist, and that the assembly tasks correspond to the components.
    for i in range(0, len(final_prod)):
        for j in range(0, len(final_prod)):
            if final_prod[i] != final_prod[j]:
                raise ValueError('Error, there is more than one final product')
    for i in range(0, len(task)):
        if task_type[i] == 'AssemblyTask':
            if task[i] not in ass_task:
                raise ValueError('Error, the assembly tasks do not correspond to the components')
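A minimal self-contained version of these checks, on hypothetical toy arrays (the real ones come from process_plan() and components()):

```python
# hypothetical toy data standing in for the query results
final_prod = ['ProdX', 'ProdX', 'ProdX']
task = ['T1', 'T2', 'A1']
task_type = ['ManufacturingTask', 'ManufacturingTask', 'AssemblyTask']
ass_task = ['A1']

# a single final product means every entry of final_prod is identical
if len(set(final_prod)) != 1:
    raise ValueError('Error, there is more than one final product')

# every assembly task must appear among the tasks with assigned components
for i in range(0, len(task)):
    if task_type[i] == 'AssemblyTask' and task[i] not in ass_task:
        raise ValueError('Error, the assembly tasks do not correspond to the components')

print('checks passed')  # no exception raised on consistent data
```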
  3. Order the input arrays based on the precedence relations in the process plan and create the order_arrays.
    order_task = []
    for i in range(0, len(task)):
        if task[i] not in successor: # the task that is not a successor of any other is the first task
            actual_task = task[i] # scalar indicating the last task added to the ordered sequence
            order_task.append(task[i]) # the first task is added to the ordered sequence
    if len(order_task) != 1:
        raise ValueError('Error, there is more than one first station')
    i = 0
    while i < len(task):
        if task[i] == actual_task and successor[i] != '0':
            order_task.append(successor[i]) # the successor of the actual task is added to the ordered sequence
            actual_task = successor[i] # the actual task is updated
            i = 0 # restart the scan from the beginning
        else:
            i = i + 1
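The ordering step can be sketched on a toy process plan; the task names below are hypothetical, and '0' marks the last task as in the source arrays:

```python
# toy process plan, in the random order returned by the query
task = ['T2', 'T1', 'T3']
successor = ['T3', 'T2', '0']  # '0' marks the task with no successor

# the task that is not a successor of any other task is the first one
order_task = [t for t in task if t not in successor]
if len(order_task) != 1:
    raise ValueError('Error, there is more than one first station')

# follow the successor chain until the '0' terminator
actual_task = order_task[0]
while True:
    succ = successor[task.index(actual_task)]
    if succ == '0':
        break
    order_task.append(succ)
    actual_task = succ

print(order_task)  # ['T1', 'T2', 'T3']
```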
  4. Order the physical elements based on their connections and check that they allow the execution of the process plan in terms of sequence of process steps.
    current = -1
    for i in range(0, len(order_machine)):
        j = 0
        flag = 1
        while flag == 1 and j < len(order_elem):
            if order_machine[i] == order_elem[j]:
                if j < current:
                    raise ValueError('Error, the physical connections do not correspond to the process connections')
                current = j
                flag = 0
            j = j + 1
    The above code is left as a comment in the scripts because some of the physical connections resulting from the query Elements.txt are wrong with respect to the designed ones:
    • Element PP1.PPs_FloatingY appears instead of PP1
    • Elements PPH1 and PI1 are not successors of any other element
    • PP1 is the downstream element of itself
    • RPP1 is the downstream element of both T1 and RT1.
    If the code is run, it returns 'Error'.
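On data where the connections are consistent, the check passes silently; a compact sketch with hypothetical element names:

```python
# hypothetical data: stations in process order, elements in physical order
order_machine = ['M1', 'M2']
order_elem = ['M1', 'B1', 'M2']  # a buffer B1 sits between the two stations

current = -1
for i in range(0, len(order_machine)):
    j = 0
    flag = 1
    while flag == 1 and j < len(order_elem):
        if order_machine[i] == order_elem[j]:
            # each station must appear no earlier than the previous match
            if j < current:
                raise ValueError('Error, the physical connections do not correspond to the process connections')
            current = j
            flag = 0
        j = j + 1

print(current)  # 2: the last matched station is the last physical element
```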
  5. Define the array of the parts.
    part = []
    part.append(final_prod[0]) # define the final product
    if len(ass_task) == len(component): # the components must be one more than the assembly tasks
        for i in range(0, len(order_component)):
            part.append(order_component[i]) # add the ordered components to the array of parts
    part_type = ['final'] # define the type of parts
    for k in range(1, len(part)):
        part_type.append('component') # every part after the first is a component

Automatic model generation

The file DataModel has to be run by the user. NB: The current version of data_model runs under these assumptions:
  • Only one final product is produced, so just one process plan exists. The service time of the stations depends just on the task and not on the served class.
  • The configuration of the system is a line without parallel flows.
  • Only the first component can be processed on its own before being assembled; all the other components visit a join node as their first object.
  • If one station performs n tasks, then n stations are created, each one in charge of one task only.
  • If n components are joined within the same task, then (n-1) joins are created in sequence. This set of joins is positioned before the station in charge of the assembly task.
    The final product visits all the joins and, at each one, a component is assembled on it. Finally, the station processes the final product.
  • Join objects are visited by parts just before the related assembly station, because this allows starvation to be properly considered.
  • The service and interarrival times are deterministic.
To develop the code the steps below are followed (only the second step requires the user to manually insert input data):
  1. Import the required libraries, functions and the related input arrays.
    from Query.DataProcess import data_process
    from jsimIO import *
  2. Set the simulation options (optional, only if flag=1) and the interarrival time of the parts (mandatory).
    flag = 1 # flag=0 to use the default simulation options, flag=1 if they are specified by the user
    if flag == 1:
        options = Default.ModelOptions
        options["maxSamples"] = "100000000"
        options["maxEvents"] = "100000000"
        options["maxSimulated"] = "10000"
        options["maxTime"] = "100000000"
  3. Verify whether the process is a manufacturing or an assembly one. The modelling of the two cases is carried out with two separate procedures.
    process_type = 0 # 0 if manufacturing process, 1 if assembly process
    z = 0
    while z < len(machine_type):
        if machine_type[z] == 'AssemblyTask': # process_type = 1 if there is at least one assembly task
            process_type = 1
        z = z + 1
  4. Create the Model instance; if flag=1, consider the options.
    if flag == 1:
        model = Model("Modello_DataModel", options)
    else:
        model = Model("Modello_DataModel")
  5. Model the system.
The modelling procedure consists of the automatic application of the procedures explained in manufacturing_process and assembly_process.

Manufacturing process procedure

5.1.1) Create the network nodes and set their capacity.
source = Source(model, "Source")
for i in range(0, len(machine)):
    globals()['M%i' % (i+1)] = Station(model, '"'+str(machine[i])+'"', buffer_size=capacity[i])
sink = Sink(model, "Sink")
5.1.2) Define the customer class.
product = OpenClass(model, '"'+str(part[0])+'"', source, Dist.Determ(arrival_time[0]))
5.1.3) Set the service times.
for j in range(0, len(machine)):
    exec('M' + str(j + 1) + '.set_service(product, Dist.Determ(' + str(time[j]) + '))')
5.1.4) Define the routings.
source.add_route(product, M1, 1)
k = 0
while k < (len(machine) - 1):
    k = k + 1
    exec('M' + str(k) + '.add_route(product,M' + str(k + 1) + ',1)')
exec('M' + str(k + 1) + '.add_route(product,sink,1)')
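The globals()/exec pattern above works, but keeping the stations in a list makes the same chain routing more idiomatic. The sketch below uses a stand-in class instead of jsimIO's Station, so all names in it are hypothetical:

```python
class FakeNode:
    """Stand-in for a jsimIO node: it only records the routes added to it."""
    def __init__(self, name):
        self.name = name
        self.routes = []
    def add_route(self, job_class, target, prob):
        self.routes.append((job_class, target.name, prob))

machine = ['Lathe', 'Mill', 'Drill']  # hypothetical station names
source = FakeNode('Source')
sink = FakeNode('Sink')
stations = [FakeNode(m) for m in machine]  # a list replaces globals()['M%i']

product = 'product'
source.add_route(product, stations[0], 1)
for a, b in zip(stations, stations[1:]):  # chain consecutive stations
    a.add_route(product, b, 1)
stations[-1].add_route(product, sink, 1)

print(stations[0].routes)   # [('product', 'Mill', 1)]
print(stations[-1].routes)  # [('product', 'Sink', 1)]
```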

Assembly process procedure

5.2.1) Create an array of component_1,...,component_(len(part)-1).
component = []
for i in range(0, len(part) - 1):
    component.append("component_%i" % (i + 1))
5.2.2) Define Source and Sink.
source = Source(model, "Source")
sink = Sink(model, "Sink")
5.2.3) Define the final product class and the related fork, the component classes and the related joins.
for i in range(0, len(part)):
    if part_type[i] == 'final': # the first product should always be the final one by construction
        final = OpenClass(model, '"'+str(part[i])+'"', source, Dist.Determ(arrival_time[0]))
        fork = Fork(model, "Fork", '"'+str(part[i])+'"')
    else:
        globals()["component_%i" % i] = OpenClass(model, '"'+str(part[i])+'"', fork, Dist.Determ(arrival_time[0]))
for j in range(1, len(component)):
    globals()["Join%i" % j] = Join(model, '"Join'+str(j)+'"', final, 2)
5.2.4) Define the machines and set their capacity.
for k in range (0, (len(machine))):
globals()['M%i' % (k+1)] = Station(model, '"'+str(machine[k])+'"', buffer_size=capacity[k])
5.2.5) Set the service times. All the Manufacturing tasks before the first assembly one are executed on the first component (created ad hoc as 'final product_Component' in the function data_processing). The successive tasks are executed on final (the final product).
count = 0 # counts the number of machining operations done on the first component before the assembly process starts
while machine_type[count] != 'AssemblyTask':
    count = count + 1
for j in range(0, count): # these machines process the first component
    exec('M' + str(j + 1) + '.set_service(' + str(component[0]) + ', Dist.Determ(' + str(time[j]) + '))')
for k in range(count, len(machine)): # these machines process final
    exec('M' + str(k + 1) + '.set_service(final, Dist.Determ(' + str(time[k]) + '))')
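The count of machining operations before the first assembly task can be checked on a toy task-type array (hypothetical values):

```python
# hypothetical task types in process order
machine_type = ['ManufacturingTask', 'ManufacturingTask',
                'AssemblyTask', 'ManufacturingTask']

count = 0  # machining operations on the first component before the assembly starts
while machine_type[count] != 'AssemblyTask':
    count = count + 1

print(count)  # 2
```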
5.2.6) Define the routings. Until the first assembly task, the first component is forked from final and processed by all the required machines. If the first station executes a manufacturing task, then the fork for the first component is connected to a machine, otherwise it is connected to the first join.
nMachine = 0 # tracks the machines and joins for which the routing is already defined
nJoin = 1
source.add_route(final, fork, 1)
for i in range(1, len(component)): # define the fork routing for each component except the first one
    exec('fork.add_route(' + str(component[i]) + ',Join' + str(i) + ',1)')
if machine_type[0] != 'AssemblyTask': # define the fork routing for the first component
    exec('fork.add_route(' + str(component[0]) + ', M1, 1)')
    while machine_type[nMachine] != 'AssemblyTask': # set the routing of the first component
        nMachine = nMachine + 1
    if machine_type[nMachine] == 'AssemblyTask': # (always true once the while cycle ends) define the join routing for the first component
        exec('M' + str(nMachine) + '.add_route(' + str(component[0]) + ',Join1,1)')
        while nJoin < len(assembly) and assembly[nJoin - 1] == assembly[nJoin]:
            nJoin = nJoin + 1
            exec('Join' + str(nJoin - 1) + '.add_route(final,Join' + str(nJoin) + ', 1)')
        exec('Join' + str(nJoin) + '.add_route(final, M' + str(nMachine + 1) + ', 1)')
else: # define the fork and join routings for the first component
    exec('fork.add_route(' + str(component[0]) + ',Join1,1)')
    while nJoin < len(assembly) and assembly[nJoin - 1] == assembly[nJoin]:
        nJoin = nJoin + 1
        exec('Join' + str(nJoin - 1) + '.add_route(final,Join' + str(nJoin) + ', 1)')
    exec('Join' + str(nJoin) + '.add_route(final, M' + str(nMachine + 1) + ', 1)')
All the other components are forked from final and their routing includes only one join each because, after the first join object is visited, all the machines process final. Check if the routing is defined up to the last station: if so, the last station is connected to the sink; otherwise continue.
if len(machine) - nMachine == 0: # if only the last station remains then it is connected to the sink
    exec('M' + str(nMachine) + '.add_route(final, sink, 1)')
while len(machine) - nMachine != 0: # define the routing for final until the second-last station
    if machine_type[nMachine] == 'AssemblyTask':
        while nJoin < len(assembly) and assembly[nJoin - 1] == assembly[nJoin]:
            nJoin = nJoin + 1
            exec('Join' + str(nJoin - 1) + '.add_route(final,Join' + str(nJoin) + ', 1)')
        exec('M' + str(nMachine) + '.add_route(final, Join' + str(nJoin) + ', 1)')
        exec('Join' + str(nJoin) + '.add_route(final, M' + str(nMachine) + ', 1)')
        nMachine = nMachine + 1
    elif machine_type[nMachine] != 'AssemblyTask':
        nMachine = nMachine + 1
exec('M' + str(nMachine) + '.add_route(final, sink, 1)') # connect the last station to the sink
  6. Run the simulation.
    path = model.write_jsimg()
    ret = model.solve_jsimg()


A folder named "name_of_model"_"date"_"time" is created.
It contains an input .jsimg file for the simulation in JSIM, a .jsim file with the results of the simulation and a .csv file containing the log of the simulation.
The LOG table is created as shown below. The columns indicate, in order:
  • Loggername: the name of the station
  • Timestamp
  • Job_ID: the number of the job
  • Class_ID: the name of the part type.
Among the evaluated performances, the throughput can be visualised as below: