Sybase SQL Server 11

On this page:
* intro
* frequently used commands
* select
* join
* select into, insert, update, delete
* datatypes
* views and temporary tables
* rules and defaults
* indexes and constraints
* t-sql constructs
* triggers
* stored procedures
* transactions
* sys. tables
* cursor
* script to copy data
* misc
* SQL tuning

 
Introduction

Sybase SQL Server 11
 - Has a single-process, multi-threaded database engine
 - Includes a query optimizer
Standard SQL:
    * DDL (Data Definition Language)
    * DML (Data Modification Language)
    * DCL (Data Control Language)
Transact-SQL = SQL + flow control (if and while), local variables, etc.

4 main verbs:  select, insert, update, delete.

But before these verbs, here are some frequently used commands when working with Sybase:


 
Frequently Used Commands

isql -Sserver -Uuser -Ppassword
1> use some_database
1> select db_name()
1> select @@servername
1> select user_name()
1> sp_helpdb
1> sp_helpdb dbname
1> sp_help
1> sp_help tablename
1> set rowcount 2                  -- to limit number of rows
1> set rowcount 0                  -- remove rowcount limitation
1> select * from sometable where 1=2            -- to see column names
1> vi                   -- to edit command
1> select name from sysobjects where type = "V"
1> quit

Here are useful help commands:
select name from sysobjects
where name like "sp_help%"

name
1 sp_help
2 sp_help_rep_agent
3 sp_help_resource_limit
4 sp_helpartition
5 sp_helpcache
6 sp_helpconfig
7 sp_helpconstraint
8 sp_helpdb
9 sp_helpdevice
10 sp_helpextendedproc
11 sp_helpexternlogin
12 sp_helpgroup
13 sp_helpindex
14 sp_helpjoins
15 sp_helpkey
16 sp_helplanguage
17 sp_helplog
18 sp_helpobjectdef
19 sp_helpremotelogin
20 sp_helprotect
21 sp_helpsegment
22 sp_helpserver
23 sp_helpsort
24 sp_helptext
25 sp_helpthreshold
26 sp_helpuser

 
SELECT

select <column-list> from <table-name>
select au_lname, au_fname from authors
select title_id, type, price, price * .1 from titles
select * from publishers
  (not recommended)

string concatenation:
select au_lname + ", " + au_fname from authors

name output columns yourself - column aliases:
select title_id, type, price "original price", price * .1 discount from titles
select "Full Author Name" = au_lname +", " + au_fname from authors

remove duplicates with distinct:
select distinct type from titles
select distinct city, state from authors
  (here distinct refers to a combination of city and state, so that each column by itself may have duplicate entries)

filtering rows with where
select <column-list> from <table-name> where <condition>
select au_lname, au_fname from authors where state="CA"

equality and inequality operators: = , <>  or != , > , >= , < , <= , !< , !>
select type, title_id, price from titles where price * total_sales < advance
(can be applied to string  comparison - default sorting order is ASCII)

logical OR and AND
select au_id, city, state
from authors
where state="CA" or city="Salt Lake City"

between and Ranges of Data:
<expression> between <expression> and <expression>
select title_id, price from titles where price between $5 and $10
equivalent to
select title_id, price from titles where price >= $5 and price <= $10

not between:
select title_id, price from titles where price not between $5 and $10
equivalent to
select title_id, price from titles where price < $5 or price > $10

in (...) :
select title_id, price from titles where type in ("mod_cook", "trad_cook", "business")
equivalent to
select title_id, price from titles where type = "mod_cook"
  or type = "trad_cook" or type = "business"

not in (...)
select title_id, price from titles where type not in ("mod_cook", "trad_cook", "business")

wildcards with like:
% - any number (0 to many) of any characters
_ (underscore) - any single character
[ ]   any single character from those listed in the brackets (this is only for Sybase)
[%] - matches a literal % character
[^A-C] - matches any character except A,B,C

select au_lname, au_fname, city, state from authors where city like "Spring%"
select type, title_id, price from titles where title_id like "B_1342"
select type, title_id, price from titles where title_id like "B[A-Za-z0]1342
     Note - if you need to include the '_' character in your pattern - use 'escape' word, for example:
            select * from titles where title_id like 'ABC\_%' escape('\')

ordering result sets with order by:
select au_lname, au_fname from authors order by au_lname
select au_lname, au_fname from authors order by au_lname, au_fname

order by position in the select list:
select title_id, price, total_sales, price*total_sales "total dollar sales"
from titles
order by 4

Ascending and Descending Ordering
select title_id, price, total_sales, price*total_sales "total dollar sales"
from titles
order by price*total_sales desc

default sort order is ascending. Example: sort by type (ascending) and then by total_sales (desc):
select title_id, price, total_sales
from titles
order by type, total_sales desc

order by columns not Appearing in the Select List:
select au_lname, au_fname from authors order by city, state

aggregate functions:
sum( ) - total numeric,
avg( ) - average numeric,
min( ) - lowest numeric or sorting string or earliest date,
max( ) - highest numeric or sorting string or latest date,
count( ) - returns the number of non-null expressions,
count(*) - returns number of rows found

select avg(price) from titles
select avg(price) "avg" from titles where type = "business"

select avg(price) "avg", sum(price) "sum" from titles
where type in ("business","mod_cook")

counting rows with count(*):
select count(*) from authors where state="CA"

aggregate functions discard null values

sub-aggregates with group by:
select type, avg(price) "avg", sum(price) "sum" from titles
where type in ("business", "mod_cook")
group by type

When two or more columns are included in the group by clause, aggregates are based on unique combinations of these columns:
select type, pub_id, avg(price) "avg", sum(price) "sum" from titles
where type in ("business", "mod_cook")
group by type, pub_id

In order for aggregates to properly subtotal (or subaverage or subcount) by non-aggregate values, all non-aggregate columns in the select list should be repeated in the group by clause.

Filtering results with having:
where - selects rows before averaging:
select type, avg(price) from titles where price > $10 group by type
having selects rows from the result set (after grouping):
select type, avg(price) from titles where price > $10 group by type having avg(price) > $20

Example: find duplicates of au_id:
select au_id, count(*) from authors
group by au_id
having count(*) > 1

Worktable - an internal temporary table which the server creates before distinct, having or order by are applied.

select type, avg(price) from titles where pub_id="1289"
group by type
having avg(price) > $15
order by avg(price) desc

 
JOIN

JOIN Operations:
join using common key (or join key):
select title, pub_name
from titles, publishers
where titles.pub_id = publishers.pub_id

select title, pub_name

from titles t
inner join publishers p on t.pub_id = p.pub_id

select au_lname, au_fname, title
from authors a, titles t, titleauthor ta
where ta.title_id = t.title_id
and a.au_id = ta.au_id
and type="psychology"

select t.*, pub_name
from titles t, publishers p
where t.pub_id = p.pub_id

Outer join: left (*=) or right (=*):
select pub_name, title from publishers p, titles t
where p.pub_id *= t.pub_id
and ...

select pub_name, title from publishers p
left outer join titles t on
p.pub_id = t.pub_id
where ...

This query retrieves all pub_names from publishers.  For any row that doesn't join successfully with titles, the title column will contain a null for that row in the output.

Subqueries - to be used instead of a constant:
select title from titles
where pub_id =
  ( select pub_id from publishers where pub_name = "Algodata Infosystems")

Subqueries with IN (or NOT IN) can return multiple rows:
select pub_name from publishers
where pub_id in
  (select pub_id from titles where type = "business")

subquery can be rewritten as a JOIN:
select pub_name from publishers p, titles t
where p.pub_id = t.pub_id and type = "business"

Subqueries with exists - the best way to eliminate duplicates:
select pub_name from publishers p
where exists
  (select * from titles t where p.pub_id = t.pub_id and type = "business")

not exists and not in :
select pub_name from publishers p
where not exists
  (select * from titles t where p.pub_id = t.pub_id and type = "business")

Unfortunately the "not exists" construct runs very slowly.
A much faster way (in ASE 12.5) is to use the ANSI outer-join syntax:

select pub_name from publishers p
left outer join titles t on p.pub_id = t.pub_id and t.type = "business"
where t.pub_id is null

I don't like the following syntax - it creates problems (char/varchar mismatches, nulls):

select pub_name from publishers
where pub_id not in
  (select pub_id from titles where type = "business")

Subqueries with aggregates in the where clause:
select type, price from titles
where price < (select avg(price) from titles)

union operations:
select city, state from authors
union
select city, state from publishers

(duplicates will be automatically removed from the result unless you use union all )

Here is one more example:
select 95 "year", month, sum(dollar_sales)
from salescurrent
group by month
union
select 94 "year", month, sum(dollar_sales)
from salescurrent
group by month
order by year, month

 
select into, insert, update, delete
Using select into:
creates a table on the fly instead of returning the output to the user:
select distinct type
into type_lookup
from titles

Example: create an empty copy of an existing table:
select * into new_salesdetail
from salesdetail
where 1 = 2

Adding rows with insert:
insert authors (au_id, au_lname, au_fname, phone)
values ("123-45-6789","Jones","Mary","212-555-1212")

Columns that are not specified will be set to their default values
If you don't specify column names at all (which is very bad practice and dangerous) - then you have to provide values for ALL columns, in the right order:

insert publishers
values ("1235","New World Printing","GA","Atlanta")
 

inserting several rows with select:
insert authors_archive (au_id, au_lname, au_fname, phone, city, state, zip)
select au_id, au_lname, au_fname, phone, city, state, zip from authors
where state="CA"

modifying rows with update:
update publishers
set pub_name = "Joe's Press"
where pub_id = "1234"

Example: update based on a join:
update db1..titles
set full_name = isnull(u.full_name, '')
from db1..titles, db2..users u
where db1..titles.kid *= u.kid

Example: double all prices in the table:
update titles
set price=price*2

update several columns simultaneously:
update contacts
set address1 = "5 W. Main St.",
     address2 = "Apt.3D",
     state="CT",
     zip="03838"
where contract_id = 17938

remove rows with delete:
delete titles
where type = "business"

Clearing the table:
delete titles
  or
truncate table titles

truncate is faster, but it doesn't log individual deletions - thus the ability to recover the database from the transaction log is compromised.

 
Datatypes

Datatypes:
Once you create a column of a certain datatype, you can not change it without dropping and re-creating the table.

create table my_table (
   id int not null,
   value float not null,
   description varchar(30) null
)

Storing strings:
char - fixed length strings  (max length - 255)
varchar - variable-length strings  (max length - 255)
nchar - multi-byte character strings
nvarchar - variable-length multi-byte char strings
text - for very large strings (up to 2 GB of text per row)

Storing binary data:
binary - for fixed-length binary strings
varbinary - for variable-length binary strings
image - for storing large binary strings (up to 2 GB of image data per row)

insert b_table (id, bin_col) values (19, 0xa134c2ff)

using timestamp:
create table timestamp_example
 ( id int not null,
   code char(3) not null,
   ts timestamp not null )

insert timestamp_example (id, code) values (17, "AAA")

The server uses the unique transaction-log-row id as a timestamp. You can not specify this value.

Optimistic locking:
using the tsequal() function to check that a row has not been changed since you read its timestamp
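A sketch of optimistic locking with tsequal(), using the timestamp_example table created above (the row id and the new code value are made up for illustration):

```sql
declare @ts varbinary(8)
select @ts = ts from timestamp_example where id = 17

-- later: the update succeeds only if the row's timestamp is unchanged;
-- if another user modified the row in between, tsequal() raises an error
update timestamp_example
set code = "BBB"
where id = 17 and tsequal(ts, @ts)
```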

Storing BLOBs (binary large objects) using text and image datatypes:
create table texts (
  id numeric(5,0) identity,
  item int not null,
  textstring text null
)

using insert:

insert texts (item, textstring) values (1, null)
insert texts (item) values (2)
insert texts (item, textstring) values (3, "some text")
insert texts (item, textstring) values (4, "some very long text")

insert statement will not insert more than 1,200 bytes.
string concatenation on strings longer than 255 chars is illegal

using writetext and readtext:
The idea is to get the pointer to the text chain and to write directly to this chain without modifying the underlying row.
To do this you should start with a non-null pointer stored in the row itself.

Here is how to retrieve the pointer for a row:
select id, textptr(textstring) textptr,
  datalength(textstring) datalength
from texts

to write a new textstring value to the 3rd row:
declare @pageptr varbinary(16)
select @pageptr = textptr(textstring)
  from texts
  where id = 3
writetext  texts.textstring @pageptr "some very long string"

to read 50 bytes starting from position 4000:
declare @pageptr varbinary(16)
select @pageptr = textptr(textstring)
  from texts
  where id = 3
readtext  texts.textstring @pageptr 4000 50

Date/Time datatypes:
datetime  (precision 3 milliseconds, 8 bytes)
smalldatetime (1 min, 4 bytes)

create table date_example (
  id int not null,
  dateval datetime not null)

insert date_example (id, dateval) values (19, "September 25, 1996 3:15PM")

you can use other date formats, for example:  4/15/99

select * from date_table where date = "9/3/95"
select * from date_table where date >= "9/3/95" and  date < "9/4/95"

timestamp - a binary value (essentially a counter); it is not a datetime

drop table #test

create table #test (
 id numeric(18,0) identity,
 ts timestamp,
 dt datetime,
 sm smalldatetime
)

 insert #test(dt,sm) values (getdate(), getdate())

 select * from #test

datediff(dd,"15-Apr-2000","15-Apr-2001") - returns '365'

where LastActivityDate > dateadd(yy,-2,getdate())
where LastActivityDate > dateadd(dd, -90, getdate())
 

Logical datatype: bit - can not be indexed and can not be null:
create table bit_sample (id int not null, description varchar(30) null, active bit not null)

Numeric datatypes:
int - 4 bytes signed
smallint  - 2 bytes signed
tinyint - unsigned byte (0..255)
float - 8 bytes
real  - 4 bytes
numeric(7,2) - you can indicate total number of digits and number of digits after decimal point. Can be used with identity (automatic counter)
decimal - similar to numeric, but can't be used with identity
money - 8 bytes
smallmoney - 4 bytes (+/-214,748.3647)

create table numeric_example (
  id numeric(8,0) identity,
  num_col numeric(7,2))

insert dollar_table (id, dollars) values (95, $12345.93)

=================================================================================
create table table_name ( ...)
drop table table_name
sp_help
sp_help table_name

object's full name:
database_name.owner_name.object_name

pubs.dbo.authors

Columns can have following properties: Null, Not null, Identity

 
Views and Temporary Tables
Views - logical way of looking at data (you can treat them as tables):
create view author_name as
  select last = au_lname, first = au_fname from authors

select * from author_name
drop view author_name

* note: dropping a view doesn't change data
* views - help security by limiting access to tables
* views - help make queries simpler
* views can contain aggregate functions and grouping, joins, other views, and a distinct clause.
* views CAN NOT include "select into", a compute clause, a union clause, or an "order by" clause.
* You can insert into, update and delete from views (restriction: only one base table may be affected).
* If a view includes columns from several tables, you can't delete rows from the view, or update columns from more than one table in a single update statement.
* You can not update, delete or insert into a view containing the distinct clause.

sp_help - lists all objects (including tables and views)
to get a list of just the views:
select name from sysobjects where type = "V"

to see a list of columns in a view:
sp_help  view_name

Renaming objects (tables, views, columns, etc.) using sp_rename:
sp_rename old_name new_name
sp_rename 'table_name.old_col_name' , 'new_col_name'

Adding columns to a table using alter:
alter table tt add
  middle_name varchar(20) null,
  fax varchar(15) null

NOTE: You can only add columns that allow null values.
NOTE: You can't remove columns using alter. You have to recreate the table and indexes and re-insert the data, for example:
create table lev1 (c1 int not null,c2 int null)
CREATE UNIQUE CLUSTERED INDEX lev1_idx ON lev1(c1)
CREATE NONCLUSTERED INDEX lev1_qib_idx ON lev1(c1,c2)

insert lev1 values (1,2)
insert lev1 values (2,2)
insert lev1 values (3,2)

-------  add columns  -----------------
alter table lev1 add c3 int null, c4 int null
-------  remove columns  -------------
exec sp_rename 'lev1','lev1_tmp'
-- sp_rename 'lev1_tmp.lev1_idx', 'lev1_tmp_idx'
-- sp_rename 'lev1_tmp.lev1_qib_idx', 'lev1_tmp_qib_idx'

create table lev1 (c1 int null, c2 int null)
CREATE UNIQUE CLUSTERED INDEX lev1_idx ON lev1(c1)
CREATE NONCLUSTERED INDEX lev1_qib_idx ON lev1(c1,c2)

insert into lev1 select c1,c2 from lev1_tmp
drop table lev1_tmp   -- this will automatically drop indexes

Temporary Tables - are real tables created in the tempdb database.  Their names start with '#'.  They exist only for the duration of a user session (or stored procedure) and are accessible only to that session. These tables are not recoverable (in case of a crash).
select au_lname, au_fname, title, pub_id
  into #titles_and_authors
  from authors a, titleauthor ta, titles t
  where a.au_id = ta.au_id and t.title_id = ta.title_id

Creating a permanent table in tempdb:
   select * into tempdb..titles from pubs2..titles

 
Rules & Defaults
Rules:
create rule order_quantity as
  @quantity between 100 and 150000

create rule color_rule as
  @color in ('black', 'brown', 'red')

create rule pub_id_rule as
  @pubid like ('99[0-9][0-9]')
  or @pubid in ('0736', '0877', '1389')

create rule date_rule as
  @date >= getdate()

drop rule key_rule

Binding rule to a column:
sp_bindrule rule_name, 'table.column_name'
sp_unbindrule 'table.column_name'

Defaults:
create default country_default as 'USA'
create default age_default as 16
create default time_default as getdate( )

binding a default to a column in a table:
sp_bindefault default_name, 'table.column_name'
sp_bindefault country_default, 'demographics.country'
sp_unbindefault 'demographics.country'

you can define a default for a column when you create a table.
later you can change it using alter table:
alter table items
  replace price default null

alter table items
  replace item_code default "N/A"

To list all rules and defaults in a database:
select name from sysobjects where type in ("R", "D")

To examine the rules and defaults bound to the column in a table:
sp_help table_name

sp_helptext object_name

Creating User-defined Datatypes:
sp_addtype ssn_type, 'char(9)', "not null"

 
Indexes and Constraints
Indexes:
2 types of indexes - clustered and nonclustered. Both are B-tree.
Clustered - only one clustered index per table can exist (data is maintained in clustered index order).
Nonclustered indexes - you can have 249 of them per table. They maintain pointers to rows (not data pages).
An index can contain up to 16 columns, but the total index width must be <= 255 bytes.

Creating indexes:
create unique clustered index name_index  on authors (au_lname, au_fname)
create index fname_index on authors (au_fname, au_lname)
***********************-
Constraints:
Primary Key and Unique Constraints
Check Constraints
Referential-integrity Constraints
Primary-Key Constraints
Foreign-Key Constraints
Modifying Constraints
Adding Constraints
Removing Constraints
Information on Constraints:
sp_helpconstraint
***********************-
Comparing Data-integrity Methods:
Rules, Defaults, Indexes, Constraints, Keys (Primary Key, Foreign Key (primary key from another table), Common key)

 
T-SQL programming constructs
T-SQL programming constructs:
standard arithmetic operations ( + - * / % )
string functions: datalength(), substring(), right(), upper(), lower(), space(), replicate(), stuff(), reverse(), ltrim(), rtrim(), ascii(), char(), str(), soundex(), difference(), charindex(), patindex()

note: to include a quote character in a string, double it (or use it once if the surrounding quotes are of the other type):

"    ""     "
'     ''     '
"    '       "
'    "       '
   'I don''t know'      "I don't know"      " Title ""The Title""  was on sale"        ' Title "The Title"  was on sale'

math functions: abs(), ceiling(), exp(), floor(), pi(), power(), rand(), round(), sign(), sqrt()
date functions: getdate(), datename(), datepart(), datediff(), dateadd()

convert - to convert data types:
select "Advance = " + convert (char(12), advance) from titles
(can also convert dates)
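convert() takes an optional style number when formatting dates; a sketch (styles 101 and 112 are common USA and digits-only forms, but verify the codes against your server's documentation):

```sql
select convert(char(10), getdate(), 101)   -- mm/dd/yyyy
select convert(char(8),  getdate(), 112)   -- yyyymmdd
select convert(datetime, "9/3/95")         -- string to datetime
```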

system functions:  host_id(), host_name(), suser_id(), suser_name(), user_id(), user_name(), user, show_role(), db_id(), db_name(), object_id(), object_name(), col_name(), col_length(), index_col(), valid_name(), datalength(), tsequal()

compute, compute by:
allow including both detail rows and summary rows in a single result set
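For example, using the titles table as elsewhere on this page (note that compute ... by requires a matching order by):

```sql
select type, price
from titles
order by type
compute avg(price) by type   -- prints an average row after each type group
```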

isnull() - substitutes a specified value (zero or something else) for nulls.
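For example, treating null prices in titles as zero, or substituting a label for display:

```sql
-- treat null prices as zero when averaging
select avg(isnull(price, $0)) from titles

-- show a label instead of null
select title_id, isnull(convert(char(12), price), "no price") from titles
```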

Batches:
  several commands followed by 'go' command
Comments:  /*  */    or  --
Local variables:
   declare @variable_name datatype
   declare @myname varchar(30), @type int
   select @type = 12
   go

note: name length is limited to 30 char (including @)

Global variables - can not be defined by users.
 (examples:  @@rowcount, @@error, @@trancount, @@transtate, ...)

Message handling:
  print "this is a message"
  declare @msg varchar(30)
  select @msg = "Hello " + user_name()
  print @msg
Error handling:
   raiserror 52345 'Row not found'
if .. else:
if (select ...)
  print "sometext"
else
  print "some_other_text"
while:
while(select ...)
  begin
    update ...

    continue
    break

  end
goto:
metka: select ...
...
goto metka
waitfor:
waitfor time "22:00:00"
while 1<2
begin
  waitfor delay "00:00:30"
  exec sp_who
end
return:
to exit the batch
set:
set rowcount 100
set statistics time on
set nocount on
...
Cursors - allow custom processing for each row:
declare leads_curs cursor for
select cust_id, ets-sale  from leads
for read only
go

open leads_curs
fetch leads_curs into ...
while (@@sqlstatus = 0)
begin
  ...
  fetch leads_curs into ...
end
close leads_curs
deallocate cursor leads_curs
go

You can also do custom updating with cursors.
In general you should avoid cursors - try to substitute them with set-based select/update statements (usually 10x faster).
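For example, a cursor that walks a table updating one row at a time can usually be replaced by a single set-based statement (the leads columns below are hypothetical):

```sql
-- instead of: declare a cursor over leads, fetch each row,
-- test last_contact and update rows one by one ...
-- one set-based statement does the same work:
update leads
set status = "stale"
where last_contact < dateadd(dd, -90, getdate())
```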

 
Triggers

***************-

create trigger [owner.]trigger_name
on [owner.]table_name
for {insert | update | delete}
as SQL_statements

Or, using the if update clause:

create trigger [owner.]trigger_name
on [owner.]table_name
for {insert | update}
as
[if update (column_name ) [{and | or} update (column_name)]...]
   SQL_statements
[if update (column_name) [{and | or} update (column_name )]...
   SQL_statement ]
...

Example:

create trigger titles_trigger
  on titles
  for insert
as
print "title inserted"
return
***************-
drop trigger trigger_name
***************-
create trigger cascade_del_trigger
  on publishers
  for delete
as
if @@rowcount = 0 -- no rows deleted, exit trigger
  return
delete titles from titles t, deleted d where t.pub_id = d.pub_id

if @@error != 0
  begin
  print "Error occurred during deleting related titles"
  end

return
***************-
create trigger tr1 on ... for insert, update as
declare @rows int
select @rows = @@rowcount
...
if (.. )
  ...
***************-
In triggers you CAN NOT do these:
  create, drop, alter table/database, grant, revoke, select into, truncate table, update statistics, reconfigure, load database/transaction, disk init/mirror/reinit/refit/remirror/unmirror
***************-
Triggers During transactions:
...
begin tran add_titles
insert ...
insert ...
commit tran
***************-
rollback trigger - rolls back the work done by the trigger and the statement that fired it
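A sketch of using it in a validation trigger (the error number and message are made up for illustration):

```sql
create trigger price_check
on titles
for update
as
if exists (select 1 from inserted where price < $0)
begin
  -- undoes the trigger's work and the update that fired it,
  -- but (unlike "rollback tran") does not abort the whole batch
  rollback trigger with raiserror 30100 "negative price not allowed"
end
```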

Nested Triggers - 16 levels of nesting max.

 
Stored procedures
Stored Procedures:
* faster (precompiled in memory)
* reduced network traffic
* modular programming
* restricted access to tables
* reduced operator error
* enforced consistency
* automated complex or sensitive transactions
***************-
create proc pub_titles as select .. from ... where ... return

To execute a stored procedure:
call it by name if it is the first command in the batch,
or use the "exec" (or "execute") command:
  exec proc_name

To delete the procedure:
drop proc proc_name

To change the procedure:
if exists (select * from sysobjects where name = "titles_for_a_pub" and type = "P" and uid=user_id())
  drop proc titles_for_a_pub
go
create proc titles_for_a_pub as ...

To display the text of the procedure:
sp_helptext proc_name

passing parameters:
create proc tt (@pub_name varchar(40)) as
select ...where ...
return

declare @my_pname   varchar(40)
select @my_pname ="Some text"
exec tt @my_pname

Default Parameter Values:
create proc proc_name
   @p_name datatype = default_value
   , ....
as
SQL Statements
return ...

----- Output parameters:
create proc tt (
   @par1 varchar(80) = null,
   @par2 int output)
as
-- some T-SQL statements go here
return

declare @myoutput int
exec  tt
   @par1 = 'sometext',
   @par2 = @myoutput  output
go

----- Returning procedure status:
return some_int_value
0 - no errors
-1 .. -99 - reserved for errors detected by the server
other values - whatever you like

declare @status_var int
exec @status_var = proc_name

Note: A stored procedure may NOT create views, defaults, rules, triggers or procedures, or issue the use statement.
Note: If a stored procedure references a table, and the table is changed - you may have to drop the procedure, change it, and re-create it.
When created, a stored procedure is transformed into a cached sequence of SQL statements. If a procedure is created "with recompile" - it forces the optimizer to generate a new query plan for each execution.

----- Returning data from one SP into a caller-SP:
-----------------------------------
----- transparent passing the result value
-----------------------------------
create proc test1 as select count(1) N from OE
create proc test2 as exec test1
exec test2
-----------------------------------
----- catch the output value in the caller SP:
-----------------------------------
create proc test3(@a int, @b int output) as select @b = @a*@a

create proc test4 as
begin
  declare @par_in int
  declare @par_out int
  declare @ret_status int
  select @par_in = 3
  exec @ret_status = test3 @par_in, @par_out output
  if not @ret_status=0 begin select @par_out = 0 end
  select @par_out
  select @par_out+1
end

exec test4
-----------------------------------
----- transparent passing the result set
-----------------------------------
create proc test5 as select * from mytab
create proc test6 as exec test5
exec test6

-----------------------------------
----- handling errors, transactions
-----------------------------------
create proc mysp
as
begin
   declare @tran_cnt int
   declare @errs int
   declare @loc int
   declare @idx int
   declare @my_id int
   select @tran_cnt = @@trancount

   if @tran_cnt = 0
       begin transaction mytran
   else
       save transaction mytran

   -- do something useful
   -- then check for errors
   select @errs = @@error
   if (@errs != 0)
   begin
     rollback transaction mytran
     return @errs
   end

   select @loc=loc, @idx=idx from mytab
   if not @@rowcount=1
   begin
      rollback transaction mytran
      raiserror 30200
      return 30200
   end

   if @tran_cnt = 0
       commit transaction mytran

   select @my_id = (@loc * 10000000) + @idx
   return 0
end

Objects referenced in a procedure:
exec sp_depends proc_name
exec sp_depends table_name

 
Transactions
Transactions - several statements grouped together and either committed or rolled back as a unit
(all individual steps are recorded into a transaction log for recovery in case of a crash)
begin tran
save tran
rollback tran
commit tran
set chained
   @@tranchained  0 - chained mode is off, 1 - chained mode is on

begin transaction
   update something
   insert something
if ...
  begin  ... commit transaction end
else
  begin ... rollback transaction end

Note: transactions put locks on pages (2K blocks of data)

savepoints inside transaction:
save tran item999
...
rollback tran item999

Nested Transactions:
@@trancount

stored procedure with a transaction inside
create proc p1
as
declare @trncnt int
select @trncnt = @@trancount     -- save value
if @trncnt = 0
  begin tran p1
else
  save tran p1

/* do some work here */

if (@@transtate = 2)  -- or other error condition
  begin
  rollback tran p1
  return 25    -- error code indicating rollback
  end

/* some more processing if required */

if @trncnt = 0
  commit tran p1

return 0  -- success

nesting transactions - batch with transaction calls a stored procedure with a transaction inside
  .....

declare @status_val int,  @trncnt int
select @trncnt = @@trancount     -- save value
if @trncnt = 0
  begin tran t1
else
  save tran t1

/* do some work here */

if (@@transtate = 2)  -- or other error condition
  begin
  rollback tran t1
  return     -- exit batch 
  end

execute @status_val = p1  -- call procedure with transaction from inside transaction - thus nesting transactions

if @status_val = 25  -- if proc p1 performed rollback
  begin
  rollback tran t1
  return
  end

if @trncnt = 0
  commit tran t1 -- decrements @@trancount to 0

return 

triggers may be part of a transaction:

begin tran
  update titles
    set price=$99
    where title_id = "BU1234"
commit tran

rollback transaction in a trigger
rollback trigger

Transaction modes:

- chained mode - the ANSI standard. You are always in a transaction: you never have to issue BEGIN TRAN, because it is issued implicitly as soon as you connect, and again as soon as the previous transaction finishes. So as soon as you issue an explicit COMMIT TRAN or ROLLBACK TRAN, you are in a new transaction. It looks strange - you only close transactions with commit or rollback, never open them, because one is always already open for you.

- unchained mode (the default for Sybase) - each individual statement is atomic. If you want to group several statements into one transaction, you have to explicitly issue BEGIN TRAN and finish with COMMIT or ROLLBACK.

Stored procedures remember the mode in which they were compiled and will not run in a different mode, unless you define them with "anymode".
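The mode is examined and changed with the sp_procxmode system procedure; a sketch (the procedure name is hypothetical):

```sql
exec sp_procxmode my_proc             -- report the procedure's current transaction mode
exec sp_procxmode my_proc, "anymode"  -- allow it to run in chained or unchained mode
```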


 
Getting info from sys. tables

Some examples using system tables:

--------------------------------------------------------
select name from sysobjects
where sysstat & 7 = 3
order by name
--------------------------------------------------------
select
  rowcnt(t2.doampg),
  (reserved_pgs(t2.id, t2.doampg) + reserved_pgs(t2.id,t2.ioampg)),
  (t3.low), data_pgs(t2.id, t2.doampg), data_pgs(t2.id,t2.ioampg),
  (
     (reserved_pgs(t2.id, t2.doampg) + reserved_pgs(t2.id,t2.ioampg)) -
     (data_pgs(t2.id, t2.doampg) + data_pgs(t2.id,t2.ioampg))
  )
from sysobjects t1, sysindexes t2, master.dbo.spt_values t3
where t2.id = object_id('myTableName')
  and t1.id = object_id('myTableName')
  and t3.number = 1 and t3.type = 'E'
-------------------------------------------------------
select t1.name, t2.name, t1.length, t1.status
from syscolumns t1, systypes t2
where t1.id = object_id('my_name')
and t1.usertype *= t2.usertype
--------------------------------------------------------
select t1.indid, t1.name
from sysindexes t1
where t1.id = object_id('my_name')
and t1.indid > 0 and t1.indid < 255
--------------------------------------------------------
select index_col('my_name', indid_buf, keyid_buf, ...)
--------------------------------------------------------
select t1.name
from master.dbo.spt_values t1, sysindexes t2
where t2.status & t1.number = t1.number
and t2.id = object_id('my_name')
and t2.indid = my_some_indid_buf
and t1.type = 'I'
and t1.number = 2
--------------------------------------------------------
select t1.name
from syssegments t1, sysindexes t2
where t1.segment = t2.segment
and t2.id = object_id('my_name')
and t2.indid = my_some_indid_buf
--------------------------------------------------------
select replstat = convert(bit, (t1.sysstat & -32768))
from sysobjects t1
where t1.id = object_id('my_name')
--------------------------------------------------------

 
Stored procedure with a cursor home - top of the page - email

if exists (select 1 from sysobjects
where name = "myname")
   drop proc myname
go

create proc myname
as
begin
  declare @my_id int
  declare @root_my_id int
  declare mycursor cursor for (select my_id from MT)
  open mycursor
  fetch mycursor into @my_id
  while ((@@sqlstatus = 0))
  begin
    exec @root_my_id = MT_get_root @my_id
    insert into MT_root_map_tmp (my_id, root_my_id)
    values (@my_id, @root_my_id)
    fetch mycursor into @my_id
  end
  close mycursor
  deallocate cursor mycursor
end
go
 


 
script to copy data between databases home - top of the page - email

You can copy data between databases using the migrate utility in DBArtisan.
Or you can use this script (it takes the table name as a parameter):
#!/bin/sh -v

if [ $# != 1 ] 
then 
    echo "Usage: $0 <table>" 
    exit 1
fi 

table=$1 

PROD="-S SERV1 -U user1 -P pass1"
DEV="-S SERV2 -U user2 -P pass2"

bcp mydb..$table out $table.dat $PROD -n > /dev/null

isql $DEV -w250 <<EOF
use mydb
go
delete $table
go
EOF

bcp mydb..$table in  $table.dat $DEV -n > /dev/null

isql $DEV -w250 <<EOF
use mydb
go
update statistics $table
go
EOF


 
misc home - top of the page - email

Count rows per distinct value (including nulls):
 
select dept, count(*) from mytable
where dept is null
group by dept
union
select distinct dept, count(*) from mytable
where dept is not null
group by dept

-- or, in one statement:
select distinct dept, count(isnull(dept,'0'))
from mytable
group by isnull(dept,'0')
Note: when you join tables, rows match only on non-null values.
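A quick illustration (hypothetical temp tables):

```sql
create table #a (k int null)
create table #b (k int null)
insert #a values (1)
insert #a values (null)
insert #b values (1)
insert #b values (null)

select a.k from #a a, #b b where a.k = b.k
-- returns only the row with k = 1: the two null rows do not match each other
```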

A common situation is joining two tables through a middle table that maps the keys between the two outer tables. This works well for normal inner joins:
select ... where A.x = B.x and B.y = C.y

But what if you want all rows from A, with the other columns filled with nulls where there is no match? You try an outer join - but it doesn't work:
select ... where A.x *= B.x and B.y *= C.y  -- doesn't work

You can either do it in two steps (via a temporary table) - or use a union:

Example of a union:

select #t1.* 
from #t1,#t2 
where #t1.bb = #t2.bb
union
select t1.* 
from #t1 t1
where not exists (
    select *
    from #t2 t2 
    where t1.bb = t2.bb
  )

Getting a date difference as a fraction of a year in Sybase:

-- select convert(numeric(6,2), datediff(mm, '4/1/02', '1/1/03'))/12.00
-- select convert(numeric(6,2), datediff( dy, '4/1/02', '1/1/03' ))/365.00
-- select convert(numeric(6,2), datediff( dy, '4/1/02', '7/1/02' ))/365.00

Catching special characters:

select count(*) from mytab where charindex(char(10), myname) !=0

The simple way to find all occurrences of special characters is a short Perl script.
Below is the code (do_query is a homegrown subroutine returning the result set as an array of hashes):
 
my ($sql,$result,$row,$id,$name,@ar,%hh);
$sql = qq{ 
  select id, name from mytab 
  where name like '%[^0-9A-Za-z -_.,''""%]%' }; 
$result = &do_query($dbh, $sql);
for $row (@$result) {
  $id = $row->{id};
  $name = $row->{name};
  @ar = unpack('C*', $name);
  %hh = ();
  for(@ar) {if($_<32 or $_>127){$hh{$_}++}}
  for(sort keys %hh){print"id <$id>: code $_ - $hh{$_} times\n"}
}

declare @a varchar(30)

select @a = "a b"
select @a = str_replace(@a, char(32), rtrim(''))
select @a
-- the above returns 'ab' in Sybase 12.5.1 and later, but returns NULL in 12.5.0, which is a bug
-- The alternative is to use a while loop:
declare @ss varchar(10)
select @ss = "a b"
while (charindex(" ", @ss) > 0)
  select @ss = substring(@ss, 1, charindex(" ",@ss)-1) + substring(@ss, charindex(" ",@ss)+1, 255)
select @ss

===========================

Using 'union' to do logical operations in single statement without creating temporary tables:

Let's say we have a table with 2 columns (a,b):

create table #t (a int null, b int null)
insert #t (a,b) values (-1,6)
insert #t (a,b) values (0,5)
insert #t (a,b) values (1,4)
insert #t (a,b) values (2,3)
insert #t (a,b) values (3,2)

and we want to return max(a) or max(b), whichever is larger.
Here is how this can be done:
 
 
select max(b) mm from #t 
having max(b) > (select max(a) from #t)
union
select max(a) mm from #t 
having max(a) >= (select max(b) from #t)


 
tune your SQL home - top of the page - email

Query Tuning Options
set forceplan on  -  forces SQL Server to join the tables in a query in the order in which they are listed in the from clause.
set table count  -  allows you to specify the number of tables that are considered at once as joins are optimized.
sp_cachestrategy  -  disables and reenables cache strategies and large I/O for individual objects.
specify the index, cache strategy, or I/O size for a query (for example: select * from mytab (index mytab_idx) where ...)
set prefetch  -  allows you to enable or disable large I/O for a session.

Benchmarking your query by hand:
   set statistics time on
   set statistics IO on

  

Help your optimizer: make sure it has correct information about your table and uses it for queries and stored procedures. This info may be incorrect, for example, after bulk loads:
   update statistics  tabname
   sp_recompile  tabname

Examine the showplan of your query:
  set showplan on
  go
  set noexec on
  go

or for a stored procedure
   set showplan on
   go
   set fmtonly on
  go

Note: do not mix set noexec   &   set  fmtonly

Run your code and examine the output - look for table scans made instead of index use. Sometimes an index is not used because:
  -  it doesn't exist on the column used in the where clause or in a join
  -  there is a datatype mismatch in the join (like an integer joined to a numeric)
  -  the query is not restrictive enough and returns a significant portion of the table, so the optimizer chooses a table scan
  -  the index is very complex, so the optimizer decides it is cheaper to just read the table itself
  -  some constructs can prevent index use: order by, group by, having, between, like, calculations in the where clause
  -  sometimes a table scan really is more effective and you don't need to eliminate it

Note that the database caches data, so repeating the same query may return results faster.
A simple solution - vary the query.

dbcc traceon(302, 310, 3604)
sp_sysmon
etc.
 
Dynamic SQL home - top of the page - email

-- "dynamic SQL" in Sybase is called "execute-immediate" (EI)

exec("select getdate()")
declare @c varchar(25)
select @c = "select getdate()"
exec (@c)
select @c = 'mycolumn'
execute( 'select ' + @c + ' from mytable where 1=2' )
-- You can do line continuation like this:
exec (' \
create procedure myproc \
@part1 int \
as \
select KeyCol, Attrib \
from MyTable \
where KeyCol = @part1 \
')
-- Note:
-- temp tables (#mytemp) created inside are dropped on exit
-- local variables created outside can not be used inside - and vice versa
-- some commands are not allowed inside:
use, set, dbcc, connect, begin transaction, commit, rollback, dump transaction
-- can't call EI from inside another EI, inside a trigger,
-- statements executed in EI can't access tables "inserted" and "deleted"
-- and EI doesn't return the result status
-- cursor usage is limited (and pointless)
-- Passing info back from EI - via table
-- EI sets @@rowcount and @@error as any other SQL

Real-life example (extracted from the middle of a long stored proc):

declare @mysql varchar(1024)
select @mysql = "insert #client1cas (id_cas) "
select @mysql = @mysql + "select w.id_cas "
select @mysql = @mysql + "from jds_words_cas w (index IX_words) "
select @mysql = @mysql + "where w.words like '" + @client_str + "%' "
select @mysql = @mysql + " and w.id_cas != null and w.id_cas != '' "

set rowcount @max_mid_rows
execute(@mysql)
select @myrowcount = @@rowcount
set rowcount 0
if (@mydebug = 1) print "________ inserted %1! rows", @myrowcount
if (@rows_cas >= @max_mid_rows)
begin
  select @many_clients_flag = 1
  goto RETURN_RESULTS
end

 

Getting Info About Rows and Sizes of Tables home - top of the page - email

select Table_Name=object_name(id),
"Rows"=convert(int,sum(rowcnt(doampg))),
"ReservedKB"=sum(reserved_pgs(id,doampg))*2+ sum(reserved_pgs(id,ioampg))*2,
"Data"=sum(data_pgs(id,doampg))*2 ,
"Index"=sum(data_pgs(id,ioampg))*2
from sysindexes
where id in (select id from sysobjects where type='U')
group by object_name(id)

Testing Outer Joins home - top of the page - email

When chaining outer joins (writing > for *=, < for =*, and = for a regular join):
>>, <>, <= are OK,
but
>= is not allowed (even on a different key)

-- testing outer joins
create table #t1 (c1 varchar(10) null, c2 varchar(10) null)
create table #t2 (c1 varchar(10) null, c2 varchar(10) null)
create table #t3 (c1 varchar(10) null, c2 varchar(10) null)
go
-- row.col
insert #t1 values('1.1','1.2')
insert #t1 values('2.1','2.2')
insert #t1 values('3.1','3.2')
insert #t2 values('1.1','1.2')
insert #t2 values('2.1','2.2')
insert #t2 values('4.1','4.2')
insert #t3 values('1.1','1.2')
insert #t3 values('2.1','2.2')
insert #t3 values('5.1','5.2')

select t1.*, t2.* from #t1 t1, #t2 t2 where t1.c1 = t2.c1
select t1.*, t2.* from #t1 t1, #t2 t2 where t1.c1 *= t2.c1

-- t1 < t2 > t3
select t1.*, t2.*, t3.* from #t1 t1, #t2 t2, #t3 t3 where t1.c1 =* t2.c1 and t2.c1 *= t3.c1

-- t1 > t2 > t3
select t1.*, t2.*, t3.* from #t1 t1, #t2 t2, #t3 t3 where t1.c1 *= t2.c1 and t2.c1 *= t3.c1

-- t1 < t2 = t3
select t1.*, t2.*, t3.* from #t1 t1, #t2 t2, #t3 t3 where t1.c1 =* t2.c1 and t2.c1 = t3.c1

-- t1 > t2 < t3 - meaningless
select t1.*, t2.*, t3.* from #t1 t1, #t2 t2, #t3 t3 where t1.c1 *= t2.c1 and t2.c1 =* t3.c1

-- t1 > t2 = t3 - not allowed
select t1.*, t2.*, t3.* from #t1 t1, #t2 t2, #t3 t3 where t1.c1 *= t2.c1 and t2.c1 = t3.c1

-- t1 > t2 = t3 - not allowed even on a different key
select t1.*, t2.*, t3.* from #t1 t1, #t2 t2, #t3 t3 where t1.c1 *= t2.c1 and t2.c2 = t3.c2

print, raiserror home - top of the page - email

===============================================
Example of using print with formatting string:
===============================================
declare @a varchar(20)
select @a = convert(varchar(20),getdate(),112)
print @a

declare @b varchar(20)
select @b = 'I am b'
print @b

print "MYMY %1! and %2! MYMY", @a, @b

===============================================
A few words about raiserror
===============================================
raiserror is similar to the print command, but it also
sets the @@error global variable to an int number.
Sybase reserves error numbers below 20000 for its own errors
(see their descriptions in the table master..sysmessages).

select count(1) from master..sysmessages
select count(1) from master..sysusermessages

You can create your own custom errors - either stored in sysusermessages, or specified on the fly.
Examples of usage:

if @@error != 0
begin
  raiserror 300004, "some message"
  return -1
end

raiserror 300002 , "MYMY %s MYMY ", @myvar
raiserror 300002 , "MYMY %1! MYMY ", @myvar

-- error 300006 doesn't exist in the sysusermessages table.
-- so we call it without a comma after the error number and specify the message

raiserror 300006 "MYMY %1! MYMY", @myvar
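To register a custom message in sysusermessages (so that the numbered raiserror forms above can find it), the usual tools are sp_addmessage and sp_getmessage; a hedged sketch (the message number and text are made up to match the examples above):

```sql
declare @msg varchar(255)
exec sp_addmessage 300002, "MYMY %1! MYMY"   -- store the message text in sysusermessages
exec sp_getmessage 300002, @msg output       -- read it back into a variable
print @msg
```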

Empty strings:

select @country = isnull(rtrim(ltrim(@country)),"")

note that an empty string literal is actually a single space in Sybase, so empty strings expand inside the char_length() function.
Thus
char_length("" + "") = 2   -- and char_length("") = 1

A workaround for this is to rtrim everything inside char_length, and use isnull outside it:
isnull(char_length(rtrim(@nm_lst)), 0)

Note here that rtrim("") returns null, so char_length(rtrim("")) is also null, not zero.
Isn't Sybase great, intuitive and easy to understand !!!

Replacing in a string:
select @nm_frst = str_replace(@nm_frst, "'", "''")

Example of Printing:

exec myprint @mydebug,63,"starting"
if (@mydebug = 1) print "We limit the rowcount globally to %1! rows",@set_glob_rowcount


create procedure myprint (
  @myflag   int = 1,           -- 1 or 0 - to print or not
  @myline   int = 0,           -- the line number
  @mystring varchar(200) = ""  -- the message string
) as
begin
  ----------------------------------
  -- usage:
  --   myprint 1, 57, "some message"
  -- or in a stored proc:
  --   myprint @mydebug, 57, "some message"
  ----------------------------------
  if (@myflag != 1)
    return
  declare @mydate varchar(30)
  select @mydate = convert(varchar(30), getdate(), 109)
  print "%1! === line ~ %2! === %3!", @mydate, @myline, @mystring
end

Example of reporting the duration of some process:

declare @mysec datetime
select @mysec = getdate()

-- do something

if (@mydebug = 1)
begin
  declare @ss varchar(60)
  select @ss = "FINISHED, total run time = " + convert(varchar(20), datediff(ss, @mysec, getdate())) + " seconds"
  exec myprint @mydebug, 1012, @ss
end

Example of populating a "suffix" table (chopping words from the beginning until nothing is left). This table serves as an index table to speed up searching for words/phrases inside a string: it lets you use "starts with" logic ( ... where ss like 'xxx%' ), which can use an index and is therefore fast.

CREATE PROCEDURE dbo.ins_words (
  @words varchar(100) = null,
  @id varchar(32) = null
) as
BEGIN

-- clean the @words
select @words = upper(rtrim(ltrim(@words)))
select @words = str_replace(@words, char(10), " ")  -- \n
select @words = str_replace(@words, char(9), " ")   -- \t
select @words = str_replace(@words, "|", " ")       -- we do this to be able to bcp
select @words = str_replace(@words, "-", " ")       -- dash is also a separator
while (charindex("  ", @words) > 0)
  select @words = str_replace(@words, "  ", " ")    -- collapse double spaces

-- we insert the whole string (even if it is empty - we still want to enter the id to make future updates easier)
insert mytab (words, id) values (@words, @id)
if (rtrim(@words) = null)
  return

declare @pos int
select @pos = charindex(' ', @words)
while (@pos > 0) -- have to insert substring(s)
begin
  insert mytab (words, id) values (substring(@words, (@pos+1), 200), @id)
  select @words = substring(@words, (@pos+1), 200)
  select @pos = charindex(' ', @words)
end

END

Update with variables technique:

-- Example 1
declare @mycount int select @mycount=0
update jds_contact set @mycount = @mycount + 1
select @mycount

-- We can do more interesting things - for example, concatenate values for the same id.
Let's say we have 2 columns (id, name).
First, we create a temporary work table with 4 columns:
(id, id_prev, name, names)

This table should have only fixed-width fields (no varchar). This is necessary so that each row is updated in place - thus preserving the order of rows.
Next we insert data into this table (ordered by id).
Next we update the id_prev field:

select @prev = 0
update #work set id_prev = @prev, @prev = id

Next we do concatenation:

select @ss = ""
update #work
set names = case when id != id_prev then rtrim(name) else rtrim(@ss) + ', ' + rtrim(name) end,
    @ss   = case when id != id_prev then rtrim(name) else rtrim(@ss) + ', ' + rtrim(name) end

Finally, select the result:

select id, max(names) from #work group by id
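Putting the steps above together in one sketch (table name, column sizes, and source table are made up for illustration):

```sql
create table #work (
  id      int       not null,
  id_prev int       not null,
  name    char(30)  not null,  -- fixed-width fields only, so rows update in place
  names   char(255) not null
)

insert #work (id, id_prev, name, names)
select id, 0, name, ''
from mytable
order by id

declare @prev int, @ss varchar(255)
select @prev = 0, @ss = ""

-- remember the previous row's id in each row
update #work set id_prev = @prev, @prev = id

-- accumulate the concatenation in @ss, restarting whenever the id changes
update #work
set names = case when id != id_prev then rtrim(name) else rtrim(@ss) + ', ' + rtrim(name) end,
    @ss   = case when id != id_prev then rtrim(name) else rtrim(@ss) + ', ' + rtrim(name) end

select id, max(names) from #work group by id
```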

Example of batch inserting using id:

declare @myrowcount int select @myrowcount=1
declare @min_id int, @max_id int, @base_id int

select @min_id = min(id), @max_id = max(id) from source
select @base_id = @min_id - 1
select * into #batch from dest where 1=2

set rowcount 5000

while (@myrowcount > 0)
begin
delete #batch

insert #batch
select * from source t
where t.id > @base_id
order by t.id

select @myrowcount = @@rowcount

select @base_id = max(id) from #batch

if (@myrowcount > 0)
begin
insert dest select * from #batch
print "inserted = %1! (max = %2!)", @base_id, @max_id
end

end

set rowcount 0

Reducing the locking problem

Reads take a shared lock; updates and inserts take an exclusive lock.

Table locks may be a result of lock promotion, but more often they happen because of your SQL / indexes. Normally table read puts a shared lock. But if you have an UPDATE or a DELETE with a WHERE clause which has a table scan - you will get an exclusive table lock without any lock promotion, regardless of locking scheme.

Switching to row-level locking (the DATAROWS locking scheme) may not solve your problem and is very expensive: your data takes up more space, UPDATEs cause a lot more fragmentation, and you need a lot more locks. Lock promotion works by the number of locks used, and since DATAROWS uses considerably more locks, you will hit the promotion threshold much sooner. If you run out of locks, you will also get a promotion to a table lock. And if you commit more rows at a time than the HWM (high lock promotion threshold), you will still get lock promotion even with the datarows locking scheme.

There are 3 locking schemes:
- ALLPAGES (data and indexes)
- DATAPAGES (only data)
- DATAROWS (only data, row-level locking)

Use sp_lock and sp_objectstats to see if multiple connections/users attempt locks on rows on the same page. Use sp_sysmon to detect lock promotion.

Lock promotion thresholds:
LWM = 200 - never attempt a promotion below 200 locks
HWM = 200 - always attempt promotion once 200 locks are reached
(HWM - High Water Mark, LWM - Low Water Mark)
Number of locks = 35,000

Note: Switching to DATAROWS might help your blocking, but maybe the problem isn't that people are trying to read the same row, or even different rows on the same page. It might be that they're blocking on data vs index access. DATAPAGES is a good first step to try since it eliminates index locking and usually this is all you need. It has most of the benefits of DATAROWS without the extreme overhead. It's always a good idea to try DATAPAGES first.

Also, the solution to avoid table locks may be simply to increase the lock promotion thresholds.
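Thresholds can be raised per table, per database, or server-wide with sp_setpglockpromote; a hedged sketch (the table name and threshold values are made up):

```sql
-- raise promotion thresholds for one table (new LWM, new HWM, new percentage):
sp_setpglockpromote "table", mytab, 2000, 2000, 100

-- or server-wide:
sp_setpglockpromote "server", null, 2000, 2000, 100
```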

Note: changing from ALLPAGES to DATAPAGES or DATAROWS is an expensive operation. Switching between DATAPAGES and DATAROWS is very easy and fast.

Note: if a WHERE clause uses local variables, the optimizer will not use an index, thus causing a table scan.

Example: modifying the locking scheme:

alter table authors lock datarows

Isolation Level

You can change isolation level using a command like this:

         set transaction isolation level 0

The value is stored in the global variable @@isolation

Possible values:
    0 - read uncommitted changes to data (allow dirty reads).
    1 - read committed only (default)
    2 - repeatable read - The transaction can repeat the same query, and no rows that have been read by the transaction will have been updated or deleted.
    3 - serializable read - The transaction can repeat the same query, and receive exactly the same results. No rows can be inserted that would appear in the result set.

You can override the isolation level for individual tables by adding a keyword after the table name in the query:

holdlock - force level 3
noholdlock - force level 1
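For example (using the pubs2 titles table):

```sql
-- read at level 3 for this table only: shared locks are held until end of transaction
select * from titles holdlock where price > 10

-- read at level 1 even if the session default is higher
select * from titles noholdlock where price > 10

-- check the session's current isolation level
select @@isolation
```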

AutoCommit

set autocommit on | off | 0 | 1 - the client-side command that turns autocommit on or off.

Duplicates

ignore_dup_key - hint to skip duplicate keys while doing insert or update. Note - you cannot create a unique index on a column that includes duplicate values or more than one null value, whether or not ignore_dup_key is set. If you attempt to do so, Adaptive Server prints an error message that displays the first of the duplicate values. You must eliminate duplicates before Adaptive Server can create a unique index on the column.

ignore_dup_row - allows you to create a new, nonunique clustered index on a table that includes duplicate rows. ignore_dup_row deletes the duplicate rows from the table, and cancels any insert or update that would create a duplicate row, but does not roll back the entire transaction.

Example: to delete duplicate rows:
        create index index_name on table_name with ignore_dup_row

allow_dup_row - allows you to create a nonunique clustered index on a table that includes duplicate rows, and allows you to duplicate rows with update and insert statements.

tempdb, transaction log

You may have many databases on the same server, but until ver. 12.* there was only one tempdb database. This created problems, as many sessions competed for it. Later versions let you create several tempdb databases - thus distributing the load.

Usually the size of tempdb is ~2 GB, however I've seen it as big as 12.5 GB - and completely buffered (mapped to memory).

When you have large transactions, you may want to increase the sizes of both tempdb and the transaction log. But if you have replication, you want to limit the size of transactions to ~3,000 rows.

Checkpoint event

Approximately once a minute, the checkpoint task checks each database on the server to see how many records have been added to the transaction log since the last checkpoint. If the server estimates that the time required to recover these transactions is greater than the database’s recovery interval, Adaptive Server issues a checkpoint. The modified pages are written from cache onto the database devices, and the checkpoint event is recorded in the transaction log. Then, the checkpoint task “sleeps” for another minute. To see the checkpoint task, execute sp_who. The checkpoint task is usually displayed as “CHECKPOINT SLEEP” in the “cmd” column.

Working with big tables

- don't create a clustered index, because when you need to drop and re-create it, ASE will lock the whole table
- use range partitions to load data (starting with ver. 15), and use local indexes (an index on a partition) instead of global indexes
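A hedged sketch of ASE 15 range partitioning with a local index (the table name, column names, and boundary values are made up):

```sql
create table big_tab (
  id  int      not null,
  val char(10) null
)
partition by range (id) (
  p1 values <= (1000000),
  p2 values <= (2000000),
  p3 values <= (MAX)
)

-- a local index is partitioned the same way as the table,
-- so maintenance can be done partition by partition:
create index big_tab_idx on big_tab (id) local index
```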
