Comment:  Merged uuid-to-hash branch down, causing all public interfaces
          except for those exceptions now documented in www/hashes.md to use
          something other than "UUID" to mean "artifact hash" or one of its
          more specific derivative terms. (e.g. Commit ID)
SHA3-256:    8ad5e4690854a81a4666fa67a6b2d4b0
User & Date: wyoung 2020-05-28 19:52:15
2020-05-29

10:32  Remove an incorrect foreign key from the mlink table. Many of the other
       foreign keys are syntactically correct, but Fossil uses numeric 0
       instead of NULL to mean "no reference", which is semantically wrong. We
       should try to fix that at some point, perhaps. Or enhance SQLite so
       that it is able to interpret 0 values on a FK reference to an INTEGER
       PRIMARY KEY as if it were a NULL, as an option. Maybe.
       (check-in: 1f5af800 user: drh tags: trunk)

08:05  Move default_css.txt to default.css, treat it like a builtin file, and
       remove mkcss, as the recent style.css reorg obviates the need for
       mkcss.
       (check-in: 0c19cd0a user: stephan tags: default.css)

2020-05-28

19:52  Merged uuid-to-hash branch down, causing all public interfaces except
       for those exceptions now documented in www/hashes.md to use something
       other than "UUID" to mean "artifact hash" or one of its more specific
       derivative terms. (e.g. Commit ID)
       (check-in: 8ad5e469 user: wyoung tags: trunk)

19:47  Second-pass edit on www/hashes.md: more definite stances on things now
       that we have a ruling on the debate, and less flagellation all around.
       (check-in: 3d808c4d user: wyoung tags: trunk)

2020-05-27

22:14  Updated all user-facing documentation and "fossil help" output (plus
       select internal comments and function names) to use "hash" rather than
       "UUID". No functional changes. (Yet?) See forum thread
       https://www.fossil-scm.org/forum/forumpost/ddc14c6866 for discussion.
       (Closed-Leaf check-in: df520195 user: wyoung tags: uuid-to-hash)
Changes to skins/blitz/ticket.txt.
    <h4>$<title></h4>
    <table class="tktDsp">
    <tr><td class="tktDspLabel">Ticket Hash</td>
    <th1>
      if {[info exists tkt_uuid]} {
        if {[hascap s]} {
          html "<td class='tktDspValue' colspan='3'>$tkt_uuid "
          html "($tkt_id)</td></tr>\n"
        } else {
          html "<td class='tktDspValue' colspan='3'>$tkt_uuid</td></tr>\n"

︙
Changes to skins/blitz_no_logo/ticket.txt.
    <h4>$<title></h4>
    <table class="tktDsp">
    <tr><td class="tktDspLabel">Ticket Hash</td>
    <th1>
      if {[info exists tkt_uuid]} {
        if {[hascap s]} {
          html "<td class='tktDspValue' colspan='3'>$tkt_uuid "
          html "($tkt_id)</td></tr>\n"
        } else {
          html "<td class='tktDspValue' colspan='3'>$tkt_uuid</td></tr>\n"

︙
Changes to src/attach.c.
︙

    #include "attach.h"
    #include <assert.h>

    /*
    ** WEBPAGE: attachlist
    ** List attachments.
    **
    **    tkt=HASH
    **    page=WIKIPAGE
    **    technote=HASH
    **
    ** At most one of technote=, tkt= or page= may be supplied.
    **
    ** If none are given, all attachments are listed.  If one is given, only
    ** attachments for the designated technote, ticket or wiki page are shown.
    **
    ** HASH may be just a prefix of the relevant technical note or ticket
    ** artifact hash, in which case all attachments of all technical notes or
    ** tickets with the prefix will be listed.
    */
    void attachlist_page(void){
      const char *zPage = P("page");
      const char *zTkt = P("tkt");
      const char *zTechNote = P("technote");
      Blob sql;
      Stmt q;

︙

    /*
    ** WEBPAGE: attachdownload
    ** WEBPAGE: attachimage
    ** WEBPAGE: attachview
    **
    ** Download or display an attachment.
    **
    ** Query parameters:
    **
    **    tkt=HASH
    **    page=WIKIPAGE
    **    technote=HASH
    **    file=FILENAME
    **    attachid=ID
    */
    void attachview_page(void){
      const char *zPage = P("page");
      const char *zTkt = P("tkt");

︙

    /*
    ** Commit a new attachment into the repository
    */
    void attach_commit(
      const char *zName,          /* The filename of the attachment */
      const char *zTarget,        /* The artifact hash to attach to */
      const char *aContent,       /* The content of the attachment */
      int szContent,              /* The length of the attachment */
      int needModerator,          /* Moderate the attachment? */
      const char *zComment        /* The comment for the attachment */
    ){
      Blob content;
      Blob manifest;

︙

      db_end_transaction(0);
    }

    /*
    ** WEBPAGE: attachadd
    ** Add a new attachment.
    **
    **    tkt=HASH
    **    page=WIKIPAGE
    **    technote=HASH
    **    from=URL
    */
    void attachadd_page(void){
      const char *zPage = P("page");
      const char *zTkt = P("tkt");
      const char *zTechNote = P("technote");

︙

    **
    ** Show the details of an attachment artifact.
    */
    void ainfo_page(void){
      int rid;                     /* RID for the control artifact */
      int ridSrc;                  /* RID for the attached file */
      char *zDate;                 /* Date attached */
      const char *zUuid;           /* Hash of the control artifact */
      Manifest *pAttach;           /* Parse of the control artifact */
      const char *zTarget;         /* Wiki, ticket or tech note attached to */
      const char *zSrc;            /* Hash of the attached file */
      const char *zName;           /* Name of the attached file */
      const char *zDesc;           /* Description of the attached file */
      const char *zWikiName = 0;   /* Wiki page name when attached to Wiki */
      const char *zTNUuid = 0;     /* Tech Note ID when attached to tech note */
      const char *zTktUuid = 0;    /* Ticket ID when attached to a ticket */
      int modPending;              /* True if awaiting moderation */
      const char *zModAction;      /* Moderation action or NULL */

︙

        );
        zFile = g.argv[3];
      }
      blob_read_from_file(&content, zFile, ExtFILE);
      user_select();
      attach_commit(
        zFile,                    /* The filename of the attachment */
        zTarget,                  /* The artifact hash to attach to */
        blob_buffer(&content),    /* The content of the attachment */
        blob_size(&content),      /* The length of the attachment */
        0,                        /* No need to moderate the attachment */
        ""                        /* Empty attachment comment */
      );
      if( !zETime ){
        fossil_print("Attached %s to wiki page %s.\n", zFile, zPageName);

︙
Changes to src/bundle.c.
︙

        fossil_fatal("incorrect hash for artifact %b", &h1);
      }
      blob_reset(&h1);
      bag_remove(&busy, blobid);
      db_finalize(&q);
    }

    /* fossil bundle cat BUNDLE HASH...
    **
    ** Write elements of a bundle on standard output
    */
    static void bundle_cat_cmd(void){
      int i;
      Blob x;
      verify_all_options();
      if( g.argc<5 ) usage("cat BUNDLE HASH...");
      bundle_attach_file(g.argv[3], "b1", 1);
      blob_zero(&x);
      for(i=4; i<g.argc; i++){
        int blobid = db_int(0,"SELECT blobid FROM bblob WHERE uuid LIKE '%q%%'",
                            g.argv[i]);
        if( blobid==0 ){
          fossil_fatal("no such artifact in bundle: %s", g.argv[i]);

︙

    ** Usage: %fossil bundle SUBCOMMAND ARGS...
    **
    **    fossil bundle append BUNDLE FILE...
    **
    **       Add files named on the command line to BUNDLE.  This subcommand
    **       has little practical use and is mostly intended for testing.
    **
    **    fossil bundle cat BUNDLE HASH...
    **
    **       Extract one or more artifacts from the bundle and write them
    **       consecutively on standard output.  This subcommand was designed
    **       for testing and introspection of bundles and is not something
    **       commonly used.
    **
    **    fossil bundle export BUNDLE ?OPTIONS?

︙

    **
    **       Remove from the repository all files that are used exclusively
    **       by check-ins in BUNDLE.  This has the effect of undoing a
    **       "fossil bundle import".
    **
    ** SUMMARY:
    **    fossil bundle append BUNDLE FILE...       Add files to BUNDLE
    **    fossil bundle cat BUNDLE HASH...          Extract file from BUNDLE
    **    fossil bundle export BUNDLE ?OPTIONS?     Create a new BUNDLE
    **       --branch BRANCH --from TAG1 --to TAG2    Check-ins to include
    **       --checkin TAG                            Use only check-in TAG
    **       --standalone                             Omit dependencies
    **    fossil bundle extend BUNDLE               Update with newer content
    **    fossil bundle import BUNDLE ?OPTIONS?     Import a bundle
    **       --publish                                Publish the import

︙
Changes to src/checkin.c.
︙

    /*
    ** Make sure the current check-in with timestamp zDate is younger than its
    ** ancestor identified rid and zUuid.  Throw a fatal error if not.
    */
    static void checkin_verify_younger(
      int rid,              /* The record ID of the ancestor */
      const char *zUuid,    /* The artifact hash of the ancestor */
      const char *zDate     /* Date & time of the current check-in */
    ){
    #ifndef FOSSIL_ALLOW_OUT_OF_ORDER_DATES
      if( checkin_is_younger(rid,zDate)==0 ){
        fossil_fatal("ancestor check-in [%S] (%s) is not older (clock skew?)"
                     " Use --allow-older to override.", zUuid, zDate);
      }

︙

    #endif /* INTERFACE */

    /*
    ** Create a manifest.
    */
    static void create_manifest(
      Blob *pOut,                 /* Write the manifest here */
      const char *zBaselineUuid,  /* Hash of baseline, or zero */
      Manifest *pBaseline,        /* Make it a delta manifest if not zero */
      int vid,                    /* BLOB.id for the parent check-in */
      CheckinInfo *p,             /* Information about the check-in */
      int *pnFBcard               /* OUT: Number of generated B- and F-cards */
    ){
      char *zDate;                /* Date of the check-in */
      char *zParentUuid = 0;      /* Hash of parent check-in */
      Blob filename;              /* A single filename */
      int nBasename;              /* Size of base filename */
      Stmt q;                     /* Various queries */
      Blob mcksum;                /* Manifest checksum */
      ManifestFile *pFile;        /* File from the baseline */
      int nFBcard = 0;            /* Number of B-cards and F-cards */
      int i;                      /* Loop counter */

︙

      int hasChanges;        /* True if unsaved changes exist */
      int vid;               /* blob-id of parent version */
      int nrid;              /* blob-id of a modified file */
      int nvid;              /* Blob-id of the new check-in */
      Blob comment;          /* Check-in comment */
      const char *zComment;  /* Check-in comment */
      Stmt q;                /* Various queries */
      char *zUuid;           /* Hash of the new check-in */
      int useHash = 0;       /* True to verify file status using hashing */
      int noSign = 0;        /* True to omit signing the manifest using GPG */
      int privateFlag = 0;   /* True if the --private option is present */
      int privateParent = 0; /* True if the parent check-in is private */
      int isAMerge = 0;      /* True if checking in a merge */
      int noWarningFlag = 0; /* True if skipping all warnings */
      int noPrompt = 0;      /* True if skipping all prompts */

︙
Changes to src/checkout.c.
︙

      );
      fossil_free(zPwd);
      db_multi_exec("DELETE FROM vfile WHERE vid=%d", vid);
    }

    /*
    ** Given the abbreviated hash of a version, load the content of that
    ** version in the VFILE table.  Return the VID for the version.
    **
    ** If anything goes wrong, panic.
    */
    int load_vfile(const char *zName, int forceMissingFlag){
      Blob uuid;
      int vid;

︙
Changes to src/content.c.
︙

        ** have no data with which to dephantomize it.  In either case,
        ** there is nothing for us to do other than return the RID. */
        db_finalize(&s1);
        db_end_transaction(0);
        return rid;
      }
      }else{
        rid = 0;  /* No entry with the same hash currently exists */
        markAsUnclustered = 1;
      }
      db_finalize(&s1);

      /* Construct a received-from ID if we do not already have one */
      content_rcvid_init(0);

︙

    */
    int content_put(Blob *pBlob){
      return content_put_ex(pBlob, 0, 0, 0, 0);
    }

    /*
    ** Create a new phantom with the given hash and return its artifact ID.
    */
    int content_new(const char *zUuid, int isPrivate){
      int rid;
      static Stmt s1, s2, s3;

      assert( g.repositoryOpen );
      db_begin_transaction();

︙

    }

    /* Allowed flags for check_exists */
    #define MISSING_SHUNNED   0x0001    /* Do not report shunned artifacts */

    /* This is a helper routine for test-artifacts.
    **
    ** Check to see that the artifact hash referenced by zUuid exists in the
    ** repository.  If it does, return 0.  If it does not, generate an error
    ** message and return 1.
    */
    static int check_exists(
      const char *zUuid,    /* Hash of the artifact we are checking for */
      unsigned flags,       /* Flags */
      Manifest *p,          /* The control artifact that references zUuid */
      const char *zRole,    /* Role of zUuid in p */
      const char *zDetail   /* Additional information, such as a filename */
    ){
      static Stmt q;
      int rc = 0;

︙
Changes to src/event.c.
︙

    ** v=BOOLEAN        Show details if TRUE.  Default is FALSE.  Optional.
    **
    ** Display an existing tech-note identified by its ID, optionally at a
    ** specific version, and optionally with additional details.
    */
    void event_page(void){
      int rid = 0;             /* rid of the event artifact */
      char *zUuid;             /* artifact hash corresponding to rid */
      const char *zId;         /* Event identifier */
      const char *zVerbose;    /* Value of verbose option */
      char *zETime;            /* Time of the tech-note */
      char *zATime;            /* Time the artifact was created */
      int specRid;             /* rid specified by aid= parameter */
      int prevRid, nextRid;    /* Previous or next edits of this tech-note */
      Manifest *pTNote;        /* Parsed technote artifact */

︙
Changes to src/finfo.c.
︙

    **
    **    a=DATETIME     Only show changes after DATETIME
    **    b=DATETIME     Only show changes before DATETIME
    **    m=HASH         Mark this particular file version
    **    n=NUM          Show the first NUM changes only
    **    brbg           Background color by branch name
    **    ubg            Background color by user name
    **    ci=HASH        Ancestors of a particular check-in
    **    orig=HASH      If both ci and orig are supplied, only show those
    **                   changes on a direct path from orig to ci.
    **    showid         Show RID values for debugging
    **
    ** DATETIME may be "now" or "YYYY-MM-DDTHH:MM:SS.SSS".  If in
    ** year-month-day form, it may be truncated, and it may also name a
    ** timezone offset from UTC as "-HH:MM" (westward) or "+HH:MM"
    ** (eastward).  Either no timezone suffix or "Z" means UTC.

︙

      blob_append_sql(&sql,
        "SELECT"
        " datetime(min(event.mtime),toLocal()),"         /* Date of change */
        " coalesce(event.ecomment, event.comment),"      /* Check-in comment */
        " coalesce(event.euser, event.user),"            /* User who made chng */
        " mlink.pid,"                                    /* Parent file rid */
        " mlink.fid,"                                    /* File rid */
        " (SELECT uuid FROM blob WHERE rid=mlink.pid),"  /* Parent file hash */
        " blob.uuid,"                                    /* Current file hash */
        " (SELECT uuid FROM blob WHERE rid=mlink.mid),"  /* Check-in hash */
        " event.bgcolor,"                                /* Background color */
        " (SELECT value FROM tagxref WHERE tagid=%d AND tagtype>0"
        "   AND tagxref.rid=mlink.mid),"                 /* Branchname */
        " mlink.mid,"                                    /* check-in ID */
        " mlink.pfnid,"                                  /* Previous filename */
        " blob.size"                                     /* File size */
        " FROM mlink, event, blob"

︙

        @ <td class="timeline%s(zStyle)Cell">
      }
      if( tmFlags & TIMELINE_COMPACT ){
        @ <span class='timelineCompactComment' data-id='%d(frid)'>
      }else{
        @ <span class='timeline%s(zStyle)Comment'>
        if( (tmFlags & TIMELINE_VERBOSE)!=0 && zUuid ){
          hyperlink_to_version(zUuid);
          @ part of check-in \
          hyperlink_to_version(zCkin);
        }
      }
      @ %W(zCom)</span>
      if( (tmFlags & TIMELINE_COMPACT)!=0 ){
        @ <span class='timelineEllipsis' data-id='%d(frid)' \
        @ id='ellipsis-%d(frid)'>...</span>
        @ <span class='clutter timelineCompactDetail'

︙

            @ id: %d(frid)←%d(srcId)
          }else{
            @ id: %d(frid)
          }
        }
      }
      @ check-in: \
      hyperlink_to_version(zCkin);
      if( fShowId ){
        @ (%d(fmid))
      }
      @ user: \
      hyperlink_to_user(zUser, zDate, ",");
      @ branch: %z(href("%R/timeline?t=%T",zBr))%h(zBr)</a>,
      if( tmFlags & (TIMELINE_COMPACT|TIMELINE_VERBOSE) ){

︙
Changes to src/import.c.
︙

    #if INTERFACE
    /*
    ** A single file change record.
    */
    struct ImportFile {
      char *zName;           /* Name of a file */
      char *zUuid;           /* Hash of the file */
      char *zPrior;          /* Prior name if the name was changed */
      char isFrom;           /* True if obtained from the parent */
      char isExe;            /* True if executable */
      char isLink;           /* True if symlink */
    };
    #endif

︙

      char *zBranch;         /* Name of a branch for a commit */
      char *zPrevBranch;     /* The branch of the previous check-in */
      char *aData;           /* Data content */
      char *zMark;           /* The current mark */
      char *zDate;           /* Date/time stamp */
      char *zUser;           /* User name */
      char *zComment;        /* Comment of a commit */
      char *zFrom;           /* from value as a hash */
      char *zPrevCheckin;    /* Name of the previous check-in */
      char *zFromMark;       /* The mark of the "from" field */
      int nMerge;            /* Number of merge values */
      int nMergeAlloc;       /* Number of slots in azMerge[] */
      char **azMerge;        /* Merge values */
      int nFile;             /* Number of aFile values */
      int nFileAlloc;        /* Number of slots in aFile[] */

︙

    }

    /*
    ** Insert an artifact into the BLOB table if it isn't there already.
    ** If zMark is not zero, create a cross-reference from that mark back
    ** to the newly inserted artifact.
    **
    ** If saveHash is true, then pContent is a commit record.  Record its
    ** artifact hash in gg.zPrevCheckin.
    */
    static int fast_insert_content(
      Blob *pContent,       /* Content to insert */
      const char *zMark,    /* Label using this mark, if not NULL */
      int saveHash,         /* Save artifact hash in gg.zPrevCheckin */
      int doParse           /* Invoke manifest_crosslink() */
    ){
      Blob hash;
      Blob cmpr;
      int rid;

      hname_hash(pContent, 0, &hash);

︙

        );
        db_multi_exec(
           "INSERT OR IGNORE INTO xmark(tname, trid, tuuid)"
           "VALUES(%B,%d,%B)",
           &hash, rid, &hash
        );
      }
      if( saveHash ){
        fossil_free(gg.zPrevCheckin);
        gg.zPrevCheckin = fossil_strdup(blob_str(&hash));
      }
      blob_reset(&hash);
      return rid;
    }

︙

      }else{
        *pzIn = &z[i];
      }
      return z;
    }

    /*
    ** Convert a "mark" or "committish" into the artifact hash.
    */
    static char *resolve_committish(const char *zCommittish){
      char *zRes;

      zRes = db_text(0, "SELECT tuuid FROM xmark WHERE tname=%Q", zCommittish);
      return zRes;
    }

︙

      Bag blobs, vers;
      bag_init(&blobs);
      bag_init(&vers);

      /* The following temp-tables are used to hold information needed for
      ** the import.
      **
      ** The XMARK table provides a mapping from fast-import "marks" and
      ** symbols into artifact hashes.
      **
      ** Given any valid fast-import symbol, the corresponding fossil rid and
      ** hash can found by searching against the xmark.tname field.
      **
      ** The XBRANCH table maps commit marks and symbols into the branch those
      ** commits belong to.  If xbranch.tname is a fast-import symbol for a
      ** check-in then xbranch.brnm is the branch that check-in is part of.
      **
      ** The XTAG table records information about tags that need to be applied
      ** to various branches after the import finishes.  The xtag.tcontent field

︙
Changes to src/info.c.
︙

      return zTags;
    }

    /*
    ** Print common information about a particular record.
    **
    **    *  The artifact hash
    **    *  The record ID
    **    *  mtime and ctime
    **    *  who signed it
    */
    void show_common_info(
      int rid,              /* The rid for the check-in to display info for */
      const char *zRecDesc, /* Brief record description; e.g. "checkout:" */
      int showComment,      /* True to show the check-in comment */
      int showFamily        /* True to show parents and children */
    ){
      Stmt q;
      char *zComment = 0;
      char *zTags;
      char *zDate;
      char *zUuid;
      zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid);
      if( zUuid ){
        zDate = db_text(0,
           "SELECT datetime(mtime) || ' UTC' FROM event WHERE objid=%d",
           rid
        );
                    /* 01234567890123 */
        fossil_print("%-13s %.40s %s\n", zRecDesc, zUuid, zDate ? zDate : "");
        free(zDate);
        if( showComment ){
          zComment = db_text(0,
            "SELECT coalesce(ecomment,comment) || "
            "       ' (user: ' || coalesce(euser,user,'?') || ')' "
            "  FROM event WHERE objid=%d",
            rid
︙

      }else{
        int rid;
        rid = name_to_rid(g.argv[2]);
        if( rid==0 ){
          fossil_fatal("no such object: %s", g.argv[2]);
        }
        show_common_info(rid, "hash:", 1, 1);
      }
    }

    /*
    ** Show the context graph (immediate parents and children) for
    ** check-in rid.
    */

︙

          @ <span class="infoTag">%h(zTagname)=%h(zValue)</span>
        }else {
          @ <span class="infoTag">%h(zTagname)</span>
        }
        if( tagtype==2 ){
          if( zOrigUuid && zOrigUuid[0] ){
            @ inherited from
            hyperlink_to_version(zOrigUuid);
          }else{
            @ propagates to descendants
          }
        }
        if( zSrcUuid && zSrcUuid[0] ){
          if( tagtype==0 ){
            @ by
          }else{
            @ added by
          }
          hyperlink_to_version(zSrcUuid);
          @ on
          hyperlink_to_date(zDate,0);
        }
        @ </li>
      }
      db_finalize(&q);
      if( cnt ){

︙

    void ci_page(void){
      Stmt q1, q2, q3;
      int rid;
      int isLeaf;
      int diffType;          /* 0: no diff, 1: unified, 2: side-by-side */
      u64 diffFlags;         /* Flag parameter for text_diff() */
      const char *zName;     /* Name of the check-in to be displayed */
      const char *zUuid;     /* Hash of zName, found via blob.uuid */
      const char *zParent;   /* Hash of the parent check-in (if any) */
      const char *zRe;       /* regex parameter */
      ReCompiled *pRe = 0;   /* regex */
      const char *zW;        /* URL param for ignoring whitespace */
      const char *zPage = "vinfo";    /* Page that shows diffs */
      const char *zPageHide = "ci";   /* Page that hides diffs */
      const char *zBrName;   /* Branch name */

︙

      append_diff_javascript(diffType==2);
      cookie_render();
      style_footer();
    }

    /*
    ** WEBPAGE: winfo
    ** URL: /winfo?name=HASH
    **
    ** Display information about a wiki page.
    */
    void winfo_page(void){
      int rid;
      Manifest *pWiki;
      char *zUuid;

︙

        const char *zUuid = db_column_text(&q, 3);
        const char *zTagList = db_column_text(&q, 4);
        Blob comment;
        int wikiFlags = WIKI_INLINE|WIKI_NOBADLINKS;
        if( db_get_boolean("timeline-block-markup", 0)==0 ){
          wikiFlags |= WIKI_NOBLOCK;
        }
        hyperlink_to_version(zUuid);
        blob_zero(&comment);
        db_column_blob(&q, 2, &comment);
        wiki_convert(&comment, 0, wikiFlags);
        blob_reset(&comment);
        @ (user:
        hyperlink_to_user(zUser,zDate,",");
        if( zTagList && zTagList[0] && g.perm.Hyperlink ){

︙
︙ | ︙ | |||
1426 1427 1428 1429 1430 1431 1432 | } prevName = fossil_strdup(zName); } if( showDetail ){ @ <li> hyperlink_to_date(zDate,""); @ — part of check-in | | | | 1426 1427 1428 1429 1430 1431 1432 1433 1434 1435 1436 1437 1438 1439 1440 1441 1442 1443 | } prevName = fossil_strdup(zName); } if( showDetail ){ @ <li> hyperlink_to_date(zDate,""); @ — part of check-in hyperlink_to_version(zVers); }else{ @ — part of check-in hyperlink_to_version(zVers); @ at hyperlink_to_date(zDate,""); } if( zBr && zBr[0] ){ @ on branch %z(href("%R/timeline?r=%T",zBr))%h(zBr)</a> } @ — %!W(zCom) (user: |
︙ | ︙ | |||
1532 1533 1534 1535 1536 1537 1538 | }else if( zType[0]=='f' ){ objType |= OBJTYPE_FORUM; @ Forum post }else{ @ Tag referencing } if( zType[0]!='e' || eventTagId == 0){ | | | 1532 1533 1534 1535 1536 1537 1538 1539 1540 1541 1542 1543 1544 1545 1546 | }else if( zType[0]=='f' ){ objType |= OBJTYPE_FORUM; @ Forum post }else{ @ Tag referencing } if( zType[0]!='e' || eventTagId == 0){ hyperlink_to_version(zUuid); } @ - %!W(zCom) by hyperlink_to_user(zUser,zDate," on"); hyperlink_to_date(zDate, "."); if( pDownloadName && blob_size(pDownloadName)==0 ){ blob_appendf(pDownloadName, "%S.txt", zUuid); } |
︙ | ︙ | |||
1564 1565 1566 1567 1568 1569 1570 | /* const char *zSrc = db_column_text(&q, 4); */ if( cnt>0 ){ @ Also attachment "%h(zFilename)" to }else{ @ Attachment "%h(zFilename)" to } objType |= OBJTYPE_ATTACHMENT; | | | 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 | /* const char *zSrc = db_column_text(&q, 4); */ if( cnt>0 ){ @ Also attachment "%h(zFilename)" to }else{ @ Attachment "%h(zFilename)" to } objType |= OBJTYPE_ATTACHMENT; if( fossil_is_artifact_hash(zTarget) ){ if ( db_exists("SELECT 1 FROM tag WHERE tagname='tkt-%q'", zTarget) ){ if( g.perm.Hyperlink && g.anon.RdTkt ){ @ ticket [%z(href("%R/tktview?name=%!S",zTarget))%S(zTarget)</a>] }else{ @ ticket [%S(zTarget)] |
︙ | ︙ | |||
  }
  return objType;
}

/*
** WEBPAGE: fdiff
** URL: fdiff?v1=HASH&v2=HASH
**
** Two arguments, v1 and v2, identify the artifacts to be diffed.
** Show diff side by side unless sbs is 0.  Generate plain text if
** "patch" is present, otherwise generate "pretty" HTML.
**
** Alternative URL: fdiff?from=filename1&to=filename2&ci=checkin
**
︙
** Return the uninterpreted content of an artifact.  This is similar
** to /raw except in this case the only way to specify the artifact
** is by the full-length SHA1 or SHA3 hash.  Abbreviations are not
** accepted.
*/
void secure_rawartifact_page(void){
  int rid = 0;
  const char *zName = PD("name", "");

  login_check_credentials();
  if( !g.perm.Read ){ login_needed(g.anon.Read); return; }
  rid = db_int(0, "SELECT rid FROM blob WHERE uuid=%Q", zName);
  if( rid==0 ){
    cgi_set_status(404, "Not Found");
    @ Unknown artifact: "%h(zName)"
    return;
  }
  g.isConst = 1;
  deliver_artifact(rid, P("m"));
}
︙
  manifest_destroy(pTktChng);
  style_footer();
}

/*
** WEBPAGE: info
** URL: info/NAME
**
** The NAME argument is any valid artifact name: an artifact hash,
** a timestamp, a tag name, etc.
**
** Because NAME can match so many different things (commit artifacts,
** wiki pages, ticket comments, forum posts...) the format of the output
** page depends on the type of artifact that NAME matches.
*/
void info_page(void){
  const char *zName;
  Blob uuid;
  int rid;
  int rc;
  int nLen;
︙
    blob_append(&prompt, zUuid, -1);
  }
  blob_append(&prompt, ".\n# Lines beginning with a # are ignored.\n", -1);
  prompt_for_user_comment(pComment, &prompt);
  blob_reset(&prompt);
}

#define AMEND_USAGE_STMT "HASH OPTION ?OPTION ...?"

/*
** COMMAND: amend
**
** Usage: %fossil amend HASH OPTION ?OPTION ...?
**
** Amend the tags on check-in HASH to change how it displays in the timeline.
**
** Options:
**
**    --author USER           Make USER the author for check-in
**    -m|--comment COMMENT    Make COMMENT the check-in comment
**    -M|--message-file FILE  Read the amended comment from FILE
**    -e|--edit-comment       Launch editor to revise comment
︙
  db_find_and_open_repository(0,0);
  user_select();
  verify_all_options();
  if( g.argc<3 || g.argc>=4 ) usage(AMEND_USAGE_STMT);
  rid = name_to_typed_rid(g.argv[2], "ci");
  if( rid==0 && !is_a_version(rid) ) fossil_fatal("no such check-in");
  zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid);
  if( zUuid==0 ) fossil_fatal("Unable to find artifact hash");
  zComment = db_text(0, "SELECT coalesce(ecomment,comment)"
                        " FROM event WHERE objid=%d", rid);
  zUser = db_text(0, "SELECT coalesce(euser,user)"
                     " FROM event WHERE objid=%d", rid);
  zDate = db_text(0, "SELECT datetime(mtime)"
                     " FROM event WHERE objid=%d", rid);
  zColor = db_text("", "SELECT bgcolor"
︙
    fossil_free((void *)pzCancelTags);
  }
  if( fHide && !fHasHidden ) hide_branch();
  if( fClose && !fHasClosed ) close_leaf(rid);
  if( zNewBranch && zNewBranch[0] ) change_branch(rid,zNewBranch);
  apply_newtags(&ctrl, rid, zUuid, zUserOvrd, fDryRun);
  if( fDryRun==0 ){
    show_common_info(rid, "hash:", 1, 0);
  }
  if( g.localOpen ){
    manifest_to_disk(rid);
  }
}
Changes to src/json_finfo.c.
︙
  if( zCheckin && *zCheckin ){
    char * zU = NULL;
    int rc = name_to_uuid2( zCheckin, "ci", &zU );
    /*printf("zCheckin=[%s], zU=[%s]", zCheckin, zU);*/
    if(rc<=0){
      json_set_err((rc<0)
                   ? FSL_JSON_E_AMBIGUOUS_UUID
                   : FSL_JSON_E_RESOURCE_NOT_FOUND,
                   "Check-in hash %s.",
                   (rc<0) ? "is ambiguous" : "not found");
      blob_reset(&sql);
      return NULL;
    }
    blob_append_sql(&sql, " AND ci.uuid='%q'", zU);
    free(zU);
  }else{
    if( zAfter && *zAfter ){
︙
Changes to src/json_tag.c.
︙
  cson_object_set(pay, "raw", cson_value_new_bool(fRaw));
  {
    Blob uu = empty_blob;
    int rc;
    blob_append(&uu, zName, -1);
    rc = name_to_uuid(&uu, 9, "*");
    if(0!=rc){
      json_set_err(FSL_JSON_E_UNKNOWN,
                   "Could not convert name back to artifact hash!");
      blob_reset(&uu);
      goto error;
    }
    cson_object_set(pay, "appliedTo", json_new_string(blob_buffer(&uu)));
    blob_reset(&uu);
  }
︙
Changes to src/main.c.
︙
  char *zRepositoryOption;   /* Most recent cached repository option value */
  char *zRepositoryName;     /* Name of the repository database file */
  char *zLocalDbName;        /* Name of the local database file */
  char *zOpenRevision;       /* Check-in version to use during database open */
  const char *zCmdName;      /* Name of the Fossil command currently running */
  int localOpen;             /* True if the local database is open */
  char *zLocalRoot;          /* The directory holding the local database */
  int minPrefix;             /* Number of digits needed for a distinct hash */
  int eHashPolicy;           /* Current hash policy.  One of HPOLICY_* */
  int fSqlTrace;             /* True if --sqltrace flag is present */
  int fSqlStats;             /* True if --sqltrace or --sqlstats are present */
  int fSqlPrint;             /* True if --sqlprint flag is present */
  int fCgiTrace;             /* True if --cgitrace is enabled */
  int fQuiet;                /* True if -quiet flag is present */
  int fJail;                 /* True if running with a chroot jail */
︙
Changes to src/manifest.c.
︙
  char *zComment = 0;
  const char isAdd = (p->zAttachSrc && p->zAttachSrc[0]) ? 1 : 0;
  /* We assume that we're attaching to a wiki page until we
  ** prove otherwise (which could on a later artifact if we
  ** process the attachment artifact before the artifact to
  ** which it is attached!) */
  char attachToType = 'w';
  if( fossil_is_artifact_hash(p->zAttachTarget) ){
    if( db_exists("SELECT 1 FROM tag WHERE tagname='tkt-%q'",
                  p->zAttachTarget) ){
      attachToType = 't';          /* Attaching to known ticket */
    }else if( db_exists("SELECT 1 FROM tag WHERE tagname='event-%q'",
                        p->zAttachTarget) ){
︙
Changes to src/name.c.
︙
      }
    }
  }
  return rid;
}

/*
** This routine takes a user-entered string and tries to convert it to
** an artifact hash.
**
** We first try to treat the string as an artifact hash, or at least a
** unique prefix of an artifact hash.  The input may be in mixed case.
** If we are passed such a string, this routine has the effect of
** converting the hash [prefix] to canonical form.
**
** If the input is not a hash or a hash prefix, then try to resolve
** the name as a tag.  If multiple tags match, pick the latest.
** A caller can force this routine to skip the hash case above by
** prefixing the string with "tag:", a useful property when the tag
** may be misinterpreted as a hex ASCII string. (e.g. "decade" or "facade")
**
** If the input is not a tag, then try to match it as an ISO-8601 date
** string YYYY-MM-DD HH:MM:SS and pick the nearest check-in to that date.
** If the input is of the form "date:*" then always resolve the name as
** a date.  The forms "utc:*" and "local:" are deprecated.
**
** Return 0 on success.  Return 1 if the name cannot be resolved.
︙
    return 0;
  }
}

/*
** This routine is similar to name_to_uuid() except in the form it
** takes its parameters and returns its value, and in that it does not
** treat errors as fatal.  zName must be an artifact hash or prefix of
** a hash.  zType is also as described for name_to_uuid().  If
** zName does not resolve, 0 is returned.  If it is ambiguous, a
** negative value is returned.  On success the rid is returned and
** pUuid (if it is not NULL) is set to a newly-allocated string,
** the full hash, which must eventually be free()d by the caller.
*/
int name_to_uuid2(const char *zName, const char *zType, char **pUuid){
  int rid = symbolic_name_to_rid(zName, zType);
  if((rid>0) && pUuid){
    *pUuid = db_text(NULL, "SELECT uuid FROM blob WHERE rid=%d", rid);
  }
  return rid;
}

/*
** name_collisions searches through events, blobs, and tickets for
** collisions of a given hash based on its length, counting only
** hashes greater than or equal to 4 hex ASCII characters (16 bits)
** in length.
*/
int name_collisions(const char *zName){
  int c = 0;               /* count of collisions for zName */
  int nLen;                /* length of zName */
  nLen = strlen(zName);
  if( nLen>=4 && nLen<=HNAME_MAX && validate16(zName, nLen) ){
    c = db_int(0,
︙
      int rid = db_column_int(&q, 0);
      const char *zUuid = db_column_text(&q, 1);
      const char *zTitle = db_column_text(&q, 2);
      @ <li><p><a href="%R/%T(zSrc)/%!S(zUuid)">
      @ %s(zUuid)</a> -
      @ <ul></ul>
      @ Ticket
      hyperlink_to_version(zUuid);
      @ - %h(zTitle).
      @ <ul><li>
      object_description(rid, 0, 0, 0);
      @ </li></ul>
      @ </p></li>
    }
    db_finalize(&q);
︙
Changes to src/printf.c.
︙
#define etPOINTER   16 /* The %p conversion */
#define etHTMLIZE   17 /* Make text safe for HTML */
#define etHTTPIZE   18 /* Make text safe for HTTP.  "/" encoded as %2f */
#define etURLIZE    19 /* Make text safe for HTTP.  "/" not encoded */
#define etFOSSILIZE 20 /* The fossil header encoding format. */
#define etPATH      21 /* Path type */
#define etWIKISTR   22 /* Timeline comment text rendered from a char*: %W */
#define etSTRINGID  23 /* String with length limit for a hash prefix: %S */
#define etROOT      24 /* String value of g.zTop: %R */
#define etJSONSTR   25 /* String encoded as a JSON string literal: %j
                          Use %!j to include double-quotes around it. */

/*
** An "etByte" is an 8-bit unsigned value.
︙
Changes to src/purge.c.
︙
** ==== WARNING: This command can potentially destroy historical data and ====
** ==== leave your repository in a goofy state. Know what you are doing!  ====
** ==== Make a backup of your repository before using this command!       ====
**
** ==== FURTHER WARNING: This command is a work-in-progress and may yet   ====
** ==== contain bugs.                                                     ====
**
**   fossil purge artifacts HASH... ?OPTIONS?
**
**      Move arbitrary artifacts identified by the HASH list into the
**      graveyard.
**
**   fossil purge cat HASH...
**
**      Write the content of one or more artifacts in the graveyard onto
**      standard output.
**
**   fossil purge checkins TAGS... ?OPTIONS?
**
**      Move the check-ins or branches identified by TAGS and all of
︙
**
** COMMON OPTIONS:
**
**    --explain         Make no changes, but show what would happen.
**    --dry-run         An alias for --explain
**
** SUMMARY:
**   fossil purge artifacts HASH.. [OPTIONS]
**   fossil purge cat HASH...
**   fossil purge checkins TAGS... [OPTIONS]
**   fossil purge files FILENAME... [OPTIONS]
**   fossil purge list
**   fossil purge obliterate ID...
**   fossil purge tickets NAME... [OPTIONS]
**   fossil purge undo ID
**   fossil purge wiki NAME... [OPTIONS]
︙
    }
    describe_artifacts_to_stdout("IN ok", 0);
    purge_artifact_list("ok", "", purgeFlags);
    db_end_transaction(0);
  }else if( strncmp(zSubcmd, "cat", n)==0 ){
    int i, piid;
    Blob content;
    if( g.argc<4 ) usage("cat HASH...");
    for(i=3; i<g.argc; i++){
      piid = db_int(0, "SELECT piid FROM purgeitem WHERE uuid LIKE '%q%%'",
                    g.argv[i]);
      if( piid==0 ) fossil_fatal("no such item: %s", g.argv[3]);
      purge_extract_item(piid, &content);
      blob_write_to_file(&content, "-");
      blob_reset(&content);
︙
Changes to src/rebuild.c.
︙
** Helper functions used by the `deconstruct' and `reconstruct' commands to
** save and restore the contents of the PRIVATE table.
*/
void private_export(char *zFileName)
{
  Stmt q;
  Blob fctx = empty_blob;
  blob_append(&fctx, "# The hashes of private artifacts\n", -1);
  db_prepare(&q,
    "SELECT uuid FROM blob WHERE rid IN ( SELECT rid FROM private );");
  while( db_step(&q)==SQLITE_ROW ){
    const char *zUuid = db_column_text(&q, 0);
    blob_append(&fctx, zUuid, -1);
    blob_append(&fctx, "\n", -1);
  }
︙
Changes to src/report.c.
︙
  void *pUser,          /* Pointer to output state */
  int nArg,             /* Number of columns in this result row */
  const char **azArg,   /* Text of data in all columns */
  const char **azName   /* Names of the columns */
){
  struct GenerateHTML *pState = (struct GenerateHTML*)pUser;
  int i;
  const char *zTid;     /* Ticket hash.  (value of column named '#') */
  const char *zBg = 0;  /* Use this background color */

  /* Do initialization
  */
  if( pState->nCount==0 ){
    /* Turn off the authorizer.  It is no longer doing anything since the
    ** query has already been prepared.
︙
Changes to src/rss.c.
︙
#include "config.h"
#include <time.h>
#include "rss.h"
#include <assert.h>

/*
** WEBPAGE: timeline.rss
** URL:  /timeline.rss?y=TYPE&n=LIMIT&tkt=HASH&tag=TAG&wiki=NAME&name=FILENAME
**
** Produce an RSS feed of the timeline.
**
** TYPE may be: all, ci (show check-ins only), t (show ticket changes only),
** w (show wiki only), e (show tech notes only), f (show forum posts only),
** g (show tag/branch changes only).
**
** LIMIT is the number of items to show.
**
** tkt=HASH filters for only those events for the specified ticket.  tag=TAG
** filters for a tag, and wiki=NAME for a wiki page.  Only one may be used.
**
** In addition, name=FILENAME filters for a specific file.  This may be
** combined with one of the other filters (useful for looking at a specific
** branch).
*/
︙
**   -type|y FLAG
**      may be: all (default), ci (show check-ins only), t (show tickets only),
**      w (show wiki only).
**
**   -limit|n LIMIT
**      The maximum number of items to show.
**
**   -tkt HASH
**      Filters for only those events for the specified ticket.
**
**   -tag TAG
**      filters for a tag
**
**   -wiki NAME
**      Filters on a specific wiki page.
︙
Changes to src/schema.c.
︙
@   comment TEXT,                    -- Comment describing the event
@   brief TEXT,                      -- Short comment when tagid already seen
@   omtime DATETIME                  -- Original unchanged date+time, or NULL
@ );
@ CREATE INDEX event_i1 ON event(mtime);
@
@ -- A record of phantoms.  A phantom is a record for which we know the
@ -- file hash but we do not (yet) know the file content.
@ --
@ CREATE TABLE phantom(
@   rid INTEGER PRIMARY KEY         -- Record ID of the phantom
@ );
@
@ -- A record of orphaned delta-manifests.  An orphan is a delta-manifest
@ -- for which we have content, but its baseline-manifest is a phantom.
︙
@   rid INTEGER PRIMARY KEY         -- Record ID of the phantom
@ );
@
@ -- Each artifact can have one or more tags.  A tag
@ -- is defined by a row in the next table.
@ --
@ -- Wiki pages are tagged with "wiki-NAME" where NAME is the name of
@ -- the wiki page.  Ticket changes are tagged with "ticket-HASH" where
@ -- HASH is the identifier of the ticket.  Tags used to assign symbolic
@ -- names to baselines and branches are of the form "sym-NAME" where
@ -- NAME is the symbolic name.
@ --
@ CREATE TABLE tag(
@   tagid INTEGER PRIMARY KEY,       -- Numeric tag ID
@   tagname TEXT UNIQUE              -- Tag name.
@ );
︙
@ -- Each attachment is an entry in the following table.  Only
@ -- the most recent attachment (identified by the D card) is saved.
@ --
@ CREATE TABLE attachment(
@   attachid INTEGER PRIMARY KEY,    -- Local id for this attachment
@   isLatest BOOLEAN DEFAULT 0,      -- True if this is the one to use
@   mtime TIMESTAMP,                 -- Last changed.  Julian day.
@   src TEXT,                        -- Hash of the attachment.  NULL to delete
@   target TEXT,                     -- Object attached to. Wikiname or Tkt hash
@   filename TEXT,                   -- Filename for the attachment
@   comment TEXT,                    -- Comment associated with this attachment
@   user TEXT                        -- Name of user adding attachment
@ );
@ CREATE INDEX attachment_idx1 ON attachment(target, filename, mtime);
@ CREATE INDEX attachment_idx2 ON attachment(src);
@
︙
Changes to src/tar.c.
︙
** If the RID object does not exist in the repository, then
** pTar is zeroed.
**
** zDir is a "synthetic" subdirectory which all files get
** added to as part of the tarball.  It may be 0 or an empty string, in
** which case it is ignored.  The intention is to create a tarball which
** politely expands into a subdir instead of filling your current dir
** with source files.  For example, pass an artifact hash or "ProjectName".
*/
void tarball_of_checkin(
  int rid,              /* The RID of the checkin from which to form a tarball */
  Blob *pTar,           /* Write the tarball into this blob */
  const char *zDir,     /* Directory prefix for all files added to tarball */
  Glob *pInclude,       /* Only add files matching this pattern */
︙
Changes to src/timeline.c.
︙
    cgi_printf(" %s", UNPUB_TAG);
  }
}

/*
** Generate a hyperlink to a version.
*/
void hyperlink_to_version(const char *zVerHash){
  if( g.perm.Hyperlink ){
    @ %z(chref("timelineHistLink","%R/info/%!S",zVerHash))[%S(zVerHash)]</a>
  }else{
    @ <span class="timelineHistDsp">[%S(zVerHash)]</span>
  }
}

/*
** Generate a hyperlink to a date & time.
*/
void hyperlink_to_date(const char *zDate, const char *zSuffix){
︙
#define TIMELINE_GRAPH    0x0000008 /* Compute a graph */
#define TIMELINE_DISJOINT 0x0000010 /* Elements are not contiguous */
#define TIMELINE_FCHANGES 0x0000020 /* Detail file changes */
#define TIMELINE_BRCOLOR  0x0000040 /* Background color by branch name */
#define TIMELINE_UCOLOR   0x0000080 /* Background color by user */
#define TIMELINE_FRENAMES 0x0000100 /* Detail only file name changes */
#define TIMELINE_UNHIDE   0x0000200 /* Unhide check-ins with "hidden" tag */
#define TIMELINE_SHOWRID  0x0000400 /* Show RID values in addition to hashes */
#define TIMELINE_BISECT   0x0000800 /* Show supplemental bisect information */
#define TIMELINE_COMPACT  0x0001000 /* Use the "compact" view style */
#define TIMELINE_VERBOSE  0x0002000 /* Use the "detailed" view style */
#define TIMELINE_MODERN   0x0004000 /* Use the "modern" view style */
#define TIMELINE_COLUMNAR 0x0008000 /* Use the "columns" view style */
#define TIMELINE_CLASSIC  0x0010000 /* Use the "classic" view style */
#define TIMELINE_VIEWS    0x001f000 /* Mask for all of the view styles */
︙
}

/*
** Output a timeline in the web format given a query.  The query
** should return these columns:
**
**    0.  rid
**    1.  artifact hash
**    2.  Date/Time
**    3.  Comment string
**    4.  User
**    5.  True if is a leaf
**    6.  background color
**    7.  type ("ci", "w", "t", "e", "g", "f", "div")
**    8.  list of symbolic tags.
︙
    if( tmFlags & TIMELINE_COMPACT ){
      @ <span class='timelineCompactComment' data-id='%d(rid)'>
    }else{
      @ <span class='timeline%s(zStyle)Comment'>
    }
    if( (tmFlags & TIMELINE_CLASSIC)!=0 ){
      if( zType[0]=='c' ){
        hyperlink_to_version(zUuid);
        if( isLeaf ){
          if( db_exists("SELECT 1 FROM tagxref"
                        " WHERE rid=%d AND tagid=%d AND tagtype>0",
                        rid, TAG_CLOSED) ){
            @ <span class="timelineLeaf">Closed-Leaf:</span>
          }else{
            @ <span class="timelineLeaf">Leaf:</span>
          }
        }
      }else if( zType[0]=='e' && tagid ){
        hyperlink_to_event_tagid(tagid<0?-tagid:tagid);
      }else if( (tmFlags & TIMELINE_ARTID)!=0 ){
        hyperlink_to_version(zUuid);
      }
      if( tmFlags & TIMELINE_SHOWRID ){
        int srcId = delta_source_rid(rid);
        if( srcId ){
          @ (%d(rid)←%d(srcId))
        }else{
          @ (%d(rid))
︙
Changes to src/tktsetup.c.
︙
     0, 40
  );
}

static const char zDefaultView[] =
@ <table cellpadding="5">
@ <tr><td class="tktDspLabel">Ticket Hash:</td>
@ <th1>
@ if {[info exists tkt_uuid]} {
@   html "<td class='tktDspValue' colspan='3'>"
@   copybtn hash-tk 0 $tkt_uuid 2
@   if {[hascap s]} {
@     html " ($tkt_id)"
@   }
︙
Changes to src/util.c.
︙
#endif
}

/*
** Returns TRUE if zSym is exactly HNAME_LEN_SHA1 or HNAME_LEN_K256
** bytes long and contains only lower-case ASCII hexadecimal values.
*/
int fossil_is_artifact_hash(const char *zSym){
  int sz = zSym ? (int)strlen(zSym) : 0;
  return (HNAME_LEN_SHA1==sz || HNAME_LEN_K256==sz) && validate16(zSym, sz);
}

/*
** Return true if the input string is NULL or all whitespace.
** Return false if the input string contains text.
︙
Changes to src/zip.c.
︙
** If the RID object does not exist in the repository, then
** pZip is zeroed.
**
** zDir is a "synthetic" subdirectory which all zipped files get
** added to as part of the zip file.  It may be 0 or an empty string,
** in which case it is ignored.  The intention is to create a zip which
** politely expands into a subdir instead of filling your current dir
** with source files.  For example, pass a commit hash or "ProjectName".
*/
static void zip_of_checkin(
  int eType,          /* Type of archive (ZIP or SQLAR) */
  int rid,            /* The RID of the checkin to build the archive from */
  Blob *pZip,         /* Write the archive content into this blob */
  const char *zDir,   /* Top-level directory of the archive */
︙
Changes to test/amend.test.
︙
}

proc manifest_comment {comment} {
  string map [list { } {\\s} \n {\\n} \r {\\r}] $comment
}

proc uuid_from_commit {res var} {
  upvar $var HASH
  regexp {^New_Version: ([0-9a-f]{40})[0-9a-f]*$} $res m HASH
}

proc uuid_from_branch {res var} {
  upvar $var HASH
  regexp {^New branch: ([0-9a-f]{40})[0-9a-f]*$} $res m HASH
}

proc uuid_from_checkout {var} {
  global RESULT
  upvar $var HASH
  fossil status
  regexp {checkout:\s+([0-9a-f]{40})[0-9a-f]*} $RESULT m HASH
}

# Make sure we are not in an open repository and initialize new repository
test_setup

########################################
# Setup: Add file and commit           #
########################################

if {![uuid_from_checkout HASHINIT]} {
  test amend-checkout-failure false
  test_cleanup_then_return
}
write_file datafile "data"
fossil add datafile
fossil commit -m "c1"
if {![uuid_from_commit $RESULT HASH]} {
  test amend-setup-failure false
  test_cleanup_then_return
}

########################################
# Test: -branch                        #
########################################
set HASHB HASHB
write_file datafile "data.file"
fossil commit -m "c2"
if {![uuid_from_commit $RESULT HASHB]} {
  test amend-branch.setup false
}
fossil amend $HASHB -branch amended-branch
test amend-branch-1.1 {[regexp {tags:\s+amended-branch} $RESULT]}
fossil branch ls
test amend-branch-1.2 {[string first "* amended-branch" $RESULT] != -1}
fossil tag list
test amend-branch-1.3 {[string first amended-branch $RESULT] != -1}
fossil tag list --raw $HASHB
test amend-branch-1.4 {[string first "branch=amended-branch" $RESULT] != -1}
test amend-branch-1.5 {[string first "sym-amended-branch" $RESULT] != -1}
fossil timeline -n 1
test amend-branch-1.6 {[string match {*Move*to*branch*amended-branch*} $RESULT]}

########################################
# Test: -bgcolor                       #
########################################
︙
100 101 102 103 104 105 106 | acf #acf 123 #123 #1234 #1234 1234 1234 123456 #123456 } { incr tc | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 | acf #acf 123 #123 #1234 #1234 1234 1234 123456 #123456 } { incr tc fossil amend $HASH -bgcolor $color test amend-bgcolor-1.$tc.a {[string match "*uuid:*$HASH*" $RESULT]} fossil tag list --raw $HASH test amend-bgcolor-1.$tc.b {[string first "bgcolor=$result" $RESULT] != -1} fossil timeline -n 1 test amend-bgcolor-1.$tc.c { [string match "*Change*background*color*to*\"$result\"*" $RESULT] } if {[artifact_from_timeline $RESULT artid]} { fossil artifact $artid test amend-bgcolor-1.$tc.d { [string match "*T +bgcolor $HASH* $result*" $RESULT] } } else { if {$VERBOSE} { protOut "No artifact found in timeline output" } test amend-bgcolor-1.$tc.d false } } fossil amend $HASH -bgcolor {} test amend-bgcolor-2.1 {[string match "*uuid:*$HASH*" $RESULT]} fossil tag list --raw $HASH test amend-bgcolor-2.2 { [string first "bgcolor=" $RESULT] == -1 && [string first "bgcolor" $RESULT] != -1 } fossil timeline -n 1 test amend-bgcolor-2.3 {[string match "*Cancel*background*color.*" $RESULT]} if {[artifact_from_timeline $RESULT artid]} { fossil artifact $artid test 
amend-bgcolor-2.4 {[string match "*T -bgcolor $HASH*" $RESULT]} } else { if {$VERBOSE} { protOut "No artifact found in timeline output" } test amend-bgcolor-2.4 false } ######################################## # Test: -branchcolor # ######################################## set HASH2 HASH2 fossil branch new brclr $HASH if {![uuid_from_branch $RESULT HASH2]} { test amend-branchcolor.setup false } fossil update $HASH2 fossil amend $HASH2 -branchcolor yellow test amend-branchcolor-1.1 {[string match "*uuid:*$HASH2*" $RESULT]} fossil tag ls --raw $HASH2 test amend-branchcolor-1.2 {[string first "bgcolor=yellow" $RESULT] != -1} fossil timeline -n 1 test amend-branchcolor-1.3 { [string match {*Change*branch*background*color*to*"yellow".*} $RESULT] } if {[regexp {(?x)[0-9]{2}(?::[0-9]{2}){2}\s+\[([0-9a-f]+)]} $RESULT m artid]} { fossil artifact $artid test amend-branchcolor-1.4 { [string match "*T \*bgcolor $HASH2* yellow*" $RESULT] } } else { if {$VERBOSE} { protOut "No artifact found in timeline output" } test amend-branchcolor-1.4 false } set HASHN HASHN write_file datafile "brclr" fossil commit -m "brclr" if {![uuid_from_commit $RESULT HASHN]} { test amend-branchcolor-propagating.setup false } write_file datafile "bc1" fossil commit -m "mc1" write_file datafile "bc2" fossil commit -m "mc2" fossil amend $HASHN -branchcolor deadbe test amend-branchcolor-2.1 {[string match "*uuid:*$HASHN*" $RESULT]} fossil tag ls --raw current test amend-branchcolor-2.2 {[string first "bgcolor=#deadbe" $RESULT] != -1} fossil timeline -n 1 test amend-branchcolor-2.3 { [string match {*Change*branch*background*color*to*"#deadbe".*} $RESULT] } ######################################## # Test: -author # ######################################## fossil amend $HASH -author author-test test amend-author-1.1 {[string match {*comment:*(user:*author-test)*} $RESULT]} fossil tag ls --raw $HASH test amend-author-1.2 {[string first "user=author-test" $RESULT] != -1} fossil timeline -n 1 test 
amend-author-1.3 {[string match {*Change*user*to*"author-test".*} $RESULT]} ######################################## # Test: -date # ######################################## set timestamp [clock scan yesterday] set date [clock format $timestamp -format "%Y-%m-%d" -gmt 1] set time [clock format $timestamp -format "%H:%M:%S" -gmt 1] set datetime "$date $time" fossil amend $HASHINIT -date $datetime test amend-date-1.1 {[string match "*uuid:*$HASHINIT*$datetime*" $RESULT]} fossil tag ls --raw $HASHINIT test amend-date-1.2 {[string first "date=$datetime" $RESULT] != -1} fossil timeline -n 1 test amend-date-1.3 {[string match "*Timestamp*$date*$time*" $RESULT]} set badformats { "%+" "%Y-%m-%d %H:%M%:%S %Z" "%d/%m/%Y %H:%M%:%S %Z" "%d/%m/%Y %H:%M%:%S" "%d/%m/%Y" } set sc 0 foreach badformat $badformats { incr sc set datetime [clock format $timestamp -format $badformat -gmt 1] fossil amend $HASHINIT -date $datetime -expectError test amend-date-2.$sc {[string first "YYYY-MM-DD HH:MM:SS" $RESULT] != -1} } ######################################## # Test: -hide # ######################################## set HASHH HASHH fossil revert fossil update trunk fossil branch new tohide current if {![uuid_from_branch $RESULT HASHH]} { test amend-hide-setup false } fossil amend $HASHH -hide test amend-hide-1.1 {[string match "*uuid:*$HASHH*" $RESULT]} fossil tag ls --raw $HASHH test amend-hide-1.2 {[string first "hidden" $RESULT] != -1} fossil timeline -n 1 test amend-hide-1.3 {[string match {*Add*propagating*"hidden".*} $RESULT]} ######################################## # Test: -close # ######################################## set HASHC HASHC fossil branch new cllf $HASH if {![uuid_from_branch $RESULT HASHC]} { test amend-close.setup false } fossil update $HASHC fossil amend $HASHC -close test amend-close-1.1.a {[string match "*uuid:*$HASHC*" $RESULT]} test amend-close-1.1.b { [string match "*comment:*Create*new*branch*named*\"cllf\"*" $RESULT] } fossil tag ls --raw $HASHC test 
amend-close-1.2 {[string first "closed" $RESULT] != -1} fossil timeline -n 1 test amend-close-1.3 {[string match {*Mark*"Closed".*} $RESULT]} write_file datafile "cllf" fossil commit -m "should fail" -expectError test amend-close-2 {[string first "closed leaf" $RESULT] != -1} set HASH3 HASH3 fossil revert fossil update trunk write_file datafile "cb" fossil commit -m "closed-branch" --branch "closebranch" if {![uuid_from_commit $RESULT HASH3]} { test amend-close-3.setup false } write_file datafile "b1" fossil commit -m "m1" write_file datafile "b2" fossil commit -m "m2" fossil amend $HASH3 --close test amend-close-3.1 {[string match "*uuid:*$HASH3*" $RESULT]} fossil tag ls --raw current test amend-close-3.2 {[string first "closed" $RESULT] != -1} fossil timeline -n 1 test amend-close-3.3 { [string match "*Add*propagating*\"closed\".*" $RESULT] } write_file datafile "changed" |
︙ | ︙ | |||
308 309 310 311 312 313 314 | } foreach res $result { append t1exp ", $res" append t2exp "sym-$res*" append t3exp "Add*tag*\"$res\".*" append t5exp "Cancel*tag*\"$res\".*" } | | | | | | | | | | | | | | | | | | | | | | | | | | | 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 | } foreach res $result { append t1exp ", $res" append t2exp "sym-$res*" append t3exp "Add*tag*\"$res\".*" append t5exp "Cancel*tag*\"$res\".*" } eval fossil amend $HASH $tags test amend-tag-$tc.1 {[string match "*uuid:*$HASH*tags:*$t1exp*" $RESULT]} fossil tag ls --raw $HASH test amend-tag-$tc.2 {[string match $t2exp $RESULT]} fossil timeline -n 1 test amend-tag-$tc.3 {[string match $t3exp $RESULT]} eval fossil amend $HASH $cancels test amend-tag-$tc.4 {![string match "*tags:*$t1exp*" $RESULT]} fossil timeline -n 1 test amend-tag-$tc.5 {[string match $t5exp $RESULT]} } ######################################## # Test: -comment # ######################################## proc prep-test {comment content} { global HASH RESULT fossil revert fossil update trunk write_file datafile $comment fossil commit -m $content if {![uuid_from_commit $RESULT HASH]} { set HASH "" } } proc test-comment {name HASH comment} { global VERBOSE RESULT test amend-comment-$name.1 { [string match "*uuid:*$HASH*comment:*$comment*" $RESULT] } fossil timeline -n 1 if {[artifact_from_timeline $RESULT artid]} { fossil artifact $artid test amend-comment-$name.2 { [string match "*T +comment $HASH* *[manifest_comment $comment]*" $RESULT] } } else { if {$VERBOSE} { protOut "No artifact found in timeline output: $RESULT" } test amend-comment-$name.2 false } fossil timeline -n 1 test 
amend-comment-$name.3 { [string match "*[short_uuid $HASH]*Edit*check-in*comment.*" $RESULT] } fossil info $HASH test amend-comment-$name.4 { [string match "*uuid:*$HASH*comment:*$comment*" $RESULT] } } prep-test "revision 1" "revision 1" fossil amend $HASH -comment "revised revision 1" test-comment 1 $HASH "revised revision 1" prep-test "revision 2" "revision 2" fossil amend $HASH -m "revised revision 2 with -m" test-comment 2 $HASH "revised revision 2 with -m" prep-test "revision 3" "revision 3" write_file commitmsg "revision 3 revised" fossil amend $HASH -message-file commitmsg test-comment 3 $HASH "revision 3 revised" prep-test "revision 4" "revision 4" write_file commitmsg "revision 4 revised with -M" fossil amend $HASH -M commitmsg test-comment 4 $HASH "revision 4 revised with -M" prep-test "final comment" "final content" if {[catch {exec which ed} result]} { if {$VERBOSE} { protOut "Install ed for interactive comment test: $result" } test-comment 5 $HASH "ed required for interactive edit" } else { fossil settings editor "ed -s" set comment "interactive edited comment" fossil_maybe_answer "a\n$comment\n.\nw\nq\n" amend $HASH --edit-comment test-comment 5 $HASH $comment } ######################################## # Test: NULL hash # ######################################## fossil amend {} -close -expectError test amend-null-uuid {$CODE && [string first "no such check-in" $RESULT] != -1} ############################################################################### test_cleanup |
Changes to www/concepts.wiki.
︙

is a duplicate of a remote repository. Communication between repositories
is via HTTP. Remote repositories are identified by URL. You can also point
a web browser at a repository and get human-readable status, history, and
tracking information about the project.

<h3><a id="artifacts"></a>2.1 Identification Of Artifacts</h3>

A particular version of a particular file is called an "artifact".
Each artifact has a universally unique name which is the
<a href="http://en.wikipedia.org/wiki/SHA1">SHA1</a> or
<a href="http://en.wikipedia.org/wiki/SHA3">SHA3-256</a> hash of the
content of that file expressed as either 40 or 64 characters of
lower-case hexadecimal. (See the [./hashpolicy.wiki|hash policy

︙
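The naming rule described above (artifact name = lower-case hex hash of the file content, 40 characters for SHA1 or 64 for SHA3-256) can be demonstrated with a few lines of Python. This is an illustrative sketch only, not Fossil's own implementation; the helper name `artifact_name` is invented here:

```python
import hashlib

def artifact_name(content: bytes, use_sha3: bool = True) -> str:
    """Return a Fossil-style artifact name for a blob of content:
    the lower-case hex SHA3-256 (64 chars) or legacy SHA1 (40 chars)
    hash of the bytes themselves."""
    h = hashlib.sha3_256(content) if use_sha3 else hashlib.sha1(content)
    return h.hexdigest()  # hexdigest() is already lower-case

# A SHA3-256 name is 64 hex characters; a legacy SHA1 name is 40.
print(len(artifact_name(b"example")))         # 64
print(len(artifact_name(b"example", False)))  # 40
```

Because the name is derived purely from content, two identical files anywhere in history are the same artifact, which is the property the hash-policy document builds on.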
Changes to www/fiveminutes.wiki.
︙

otherwise just the listed file(s) will be checked in.

<h2>Compare two revisions of a file</h2>
<p>If you wish to compare the last revision of a file and its checked
out version in your work directory:</p>
<p>fossil gdiff myfile.c</p>
<p>If you wish to compare two different revisions of a file in the
repository:</p>
<p>fossil finfo myfile: Note the first hash, which is the hash of the
commit when the file was committed</p>
<p>fossil gdiff --from HASH#1 --to HASH#2 myfile.c</p>

<h2>Cancel changes and go back to previous revision</h2>
<p>fossil revert myfile.c</p>
<p>Fossil does not prompt when reverting a file. It simply reminds the
user about the "undo" command, just in case the revert was a mistake.</p>
Changes to www/json-api/api-artifact.md.
︙

<a id="checkin"></a>
# Checkin Artifacts (Commits)

Returns information about checkin artifacts (commits).

**Status:** implemented 201110xx

**Request:** `/json/artifact/COMMIT_HASH`

**Required permissions:** "o" (was "h" prior to 20120408)

**Response payload example: (CHANGED SIGNIFICANTLY ON 20120713)**

```json
{
  "type":"checkin",
  "name":"18dd383e5e7684e", // as given on CLI
  "uuid":"18dd383e5e7684ecee327d3de7d3ff846069d1b2",
  "isLeaf":false,
  "user":"drh",
  "comment":"Merge wideAnnotateUser and jsonWarnings into trunk.",
  "timestamp":1330090810,
  "parents":[
    // 1st entry is primary parent hash:
    "3a44f95f40a193739aaafc2409f155df43e74a6f",
    // Remaining entries are merged-in branch hashes:
    "86f6e675eb3f8761d70d8b82b052ce2b297fffd2",
    "dbf4ecf414881c9aad6f4f125dab9762589ef3d7"
  ],
  "tags":["trunk"],
  "files":[{
    "name":"src/diff.c",
    // BLOB hashes, NOT commit hashes:
    "uuid":"78c74c3b37e266f8f7e570d5cf476854b7af9d76",
    "parent":"b1fa7e636cf4e7b6ed20bba2d2680397f80c096a",
    "state":"modified",
    "downloadPath":"/raw/src/diff.c?name=78c74c3b37e266f8f7e570d5cf476854b7af9d76"
  }, ...]
}
```

The "parents" property lists the parent hashes of the checkin. The
"parent" property of file entries refers to the parent hash of that
file. In the case of a merge there may be essentially an arbitrary
number. The first entry in the list is the "primary" parent: the parent
which was not pulled in via a merge operation. The ordering of remaining
entries is unspecified and may even change between calls. For example:
if, from branch C, we merge in A and B and then commit, then in the
artifact response for that commit the hash of branch C will be in the
first (primary) position, with the hashes for branches A and B in the
following entries (in an unspecified, and possibly unstable, order).

Note that the "uuid" and "parent" properties of the "files" entries
refer to raw blob hashes, not commit (a.k.a. check-in) hashes. See also
[the UUID vs. Hash discussion][uvh].

<a id="file"></a>
# File Artifacts

Fetches information about file artifacts.

**FIXME:** the content type guessing is currently very primitive, and
may (but i haven't seen this) mis-diagnose some non-binary files as
binary. Fossil doesn't yet have a mechanism for mime-type mappings.

**Status:** implemented 20111020

**Required permissions:** "o"

**Request:** `/json/artifact/FILE_HASH`

**Request options:**

- `format=(raw|html|none)` (default=none). If set, the contents of the
  artifact are included if they are text, else they are not (JSON does
  not do binary). The "html" flag runs it through the wiki parser. The
  results of doing so are unspecified for non-embedded-doc files. The
  "raw" format means to return the content as-is. "none" is the same as
  not specifying this flag, and elides the content from the response.
- DEPRECATED (use format instead): `includeContent=bool` (=false)
  (CLI: `--content|-c`). If true, the full content of the artifact is
  returned for text-only artifacts (but not for non-text artifacts). As
  of 20120713 this option is only inspected if "format" is not
  specified.

**Response payload example: (CHANGED SIGNIFICANTLY ON 20120713)**

```json
{
  "type":"file",
  "name":"same name specified as FILE_HASH argument",
  "size": 12345, // in bytes, independent of format=...
  "parent": "hash of parent file blob. Not set for first generation.",
  "checkins":[{
    "name":"src/json_detail.h",
    "timestamp":1319058803,
    "comment":"...",
    "user":"stephan",
    "checkin":"d2c1ae23a90b24f6ca1d7637193a59d5ecf3e680",
    "branch":"json",
    "state":"added|modified|removed"
  }, ...],
  /* The following "content" properties are only set if format=raw|html */
  "content": "file contents",
  "contentSize": "size of content field, in bytes. Affected by the format option!",
  "contentType": "text/plain", /* currently always text/plain */
  "contentFormat": "html|raw"
}
```

The "checkins" array lists all checkins which include this file, and a
file might have different names across different branches. The size and
hash, however, are the same across all checkins for a given blob.

<a id="wiki"></a>
# Wiki Artifacts

Returns information about wiki artifacts.

**Status:** implemented 20111020, fixed to return the requested version
(instead of the latest) on 20120302.

**Request:** `/json/artifact/WIKI_HASH`

**Required permissions:** "j"

**Options:**

- DEPRECATED (use format instead): `bool includeContent` (=false). If
  true then the raw content is returned with the wiki page, else no
  content is returned.\
  CLI: `--includeContent|-c`
- The `--format` option works as for
  [`/json/wiki/get`](api-wiki.md#get), and if set then it implies the
  `includeContent` option.

**Response payload example:**

Currently the same as [`/json/wiki/get`](api-wiki.md#get).

[uvh]: ../hashes.md#uvh
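A client consuming the checkin payload described in api-artifact.md usually needs to separate the primary parent from the merged-in parents. A minimal sketch of that (the helper name `split_parents` is invented here; the sample dict mirrors the documented payload shape with shortened fake hashes):

```python
# Sample shaped like the /json/artifact checkin response (hashes shortened).
checkin = {
    "uuid": "18dd383e5e7684ec",
    "parents": ["3a44f95f40a19373",   # first entry: primary parent
                "86f6e675eb3f8761",   # remaining entries: merged-in parents
                "dbf4ecf414881c9a"],
    "files": [{"name": "src/diff.c",
               "uuid": "78c74c3b37e266f8",  # blob hash, not a commit hash
               "state": "modified"}],
}

def split_parents(artifact: dict):
    """Split a checkin artifact's parents into (primary, merged_in),
    relying only on the documented ordering guarantee: the primary
    parent is first, the rest are in unspecified order."""
    parents = artifact.get("parents", [])
    return (parents[0] if parents else None), parents[1:]

primary, merged = split_parents(checkin)
print(primary)  # 3a44f95f40a19373
```

Only the first-position guarantee should be relied upon; per the text above, the order of the remaining entries may change between calls.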
Changes to www/json-api/api-diff.md.
︙

```json
  "to":"96920e7c04746c55ceed6e24fc82879886cb8197",
  "diff":"@@ -1,7 +1,7 @@\n-C factored\\sout..."
}
```

TODOs:

- Unlike the standard diff command, which apparently requires a commit
  hash, this one diffs individual file versions. If a commit hash is
  provided, a diff of the manifests is returned. (That should be
  considered a bug - we should return a combined diff in that case.)
- If hashes from two different types of artifacts are given, results
  are unspecified. Garbage in, garbage out, and all that.
- For file diffs, add the file name(s) to the response payload.
Changes to www/json-api/api-tag.md.
︙

```json
  "value":"abc",
  "propagate":true,
  "raw":false,
  "appliedTo":"626ab2f3743543122cc11bc082a0603d2b5b2b1b"
}
```

The `appliedTo` property is the hash of the check-in to which the tag
was applied. This is the "resolved" version of the check-in name
provided by the client.

<a id="cancel"></a>
# Cancel Tag

**Status:** implemented 20111006

︙

- `name=string` The tag name to search for. Can optionally be the 3rd
  path element.
- `limit=int` (default=0) Limits the number of results (0=no limit).
  Since they are ordered from oldest to newest, the newest N results
  will be returned.
- `type=string` (default=`*`) Searches only for the given type of
  artifact (using fossil's conventional type naming: ci, e, t, w).
- `raw=bool` (=false) If enabled, the response is an array of hashes of
  the requested artifact type; otherwise, it is an array of higher-level
  objects. If this is true, the "name" property is interpreted as-is. If
  it is false, the name is automatically prepended with "sym-" (meaning
  a branch). (FIXME: the current semantics are confusing and hard to
  remember. Re-do them.)

**Response payload example, in RAW mode: (expect this format to change
at some point!)**

︙
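The `raw`-mode name handling documented for `/json/tag/find` reduces to a one-line rule. A hedged sketch of what a client can expect the server to do with the name (the helper `normalize_tag_name` is hypothetical and mirrors only the behavior stated above, not Fossil's actual code):

```python
def normalize_tag_name(name: str, raw: bool) -> str:
    """Per the documented /json/tag/find semantics: in raw mode the
    name is used as-is; otherwise "sym-" is prepended, i.e. the name
    is treated as a branch name."""
    return name if raw else "sym-" + name

print(normalize_tag_name("release", raw=False))  # sym-release
print(normalize_tag_name("release", raw=True))   # release
```

This is also why searching for a plain tag (rather than a branch) currently requires `raw=true`, which is part of what the FIXME above complains about.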
Changes to www/json-api/api-timeline.md.
︙

```json
  "uuid":"be700e84336941ef1bcd08d676310b75b9070f43",
  "timestamp":1317094090,
  "comment":"Added /json/timeline/ci showFiles to ajax test page.",
  "user":"stephan",
  "isLeaf":true,
  "bgColor":null, /* not quite sure why this is null? */
  "type":"ci",
  "parents": ["primary parent hash", "...other parent hashes"],
  "tags":["json"],
  "files":[{
    "name":"ajax/index.html",
    "uuid":"9f00773a94cea6191dc3289aa24c0811b6d0d8fe",
    "parent":"50e337c33c27529e08a7037a8679fb84b976ad0b",
    "state":"modified"
  }]
```

︙

commit. The first entry in the array is the "primary parent" - the one
which was not involved in a merge with the child.

**Request options:**

- `files=bool` toggles the addition of a "files" array property which
  contains objects describing the files changed by the commit,
  including their hash, previous hash, and state change type (modified,
  added, or removed). ([“uuid” here means hash][uvh])\
  CLI mode: `--show-files|-f`
- `tag|branch=string` selects only entries with the given tag or "close
  to" the given branch. Only one of these may be specified and if both
  are specified, which one takes precedence is unspecified. If the
  given tag/branch does not exist, an error response is generated. The
  difference between the two is subtle - tag filters only on the given
  tag (analog to the HTML interface's "r" option) whereas branch can

︙

```json
    "comment":"Ticket [b64435dba9] <i>How to...</i>",
    "briefComment":"Ticket [b64435dba9]: 2 changes",
    "ticketUuid":"b64435dba9cceb709bd54fbc5883884d73f93491"
  },...]
}
```

**Notice that there are two [hashes][uvh] for tickets:** `uuid` is the
hash of the ticket change artifact and `ticketUuid` is the hash of the
ticket itself. This is an unfortunate discrepancy vis-a-vis the other
timeline entries, which only have one hash. We may want to swap these
around: make `uuid` mean the ticket's hash and rename the change hash
to `commitHash`.

<a id="wiki"></a>
# Wiki Timeline

**Status:** implemented 201109xx

**Required privileges:** "j" or "o"

︙

```json
    "user":"stephan",
    "eventType":"w"
  },...]
}
```

The `uuid` of each entry can be passed to `/json/artifact` or
`/json/wiki/get?uuid=...` to fetch the raw page and the hash of the
parent version.

[uvh]: ../hashes.md#uvh
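The two-hashes-per-ticket quirk described in api-timeline.md means a client must group by `ticketUuid`, not `uuid`, to reconstruct a ticket's history. A minimal sketch (the helper `changes_by_ticket` is invented here; the entries use shortened fake hashes shaped like the ticket-timeline payload):

```python
from collections import defaultdict

# Each ticket-timeline entry carries a "uuid" (hash of the change
# artifact) and a "ticketUuid" (hash identifying the ticket itself).
entries = [
    {"uuid": "aaa111", "ticketUuid": "b64435dba9", "briefComment": "2 changes"},
    {"uuid": "bbb222", "ticketUuid": "b64435dba9", "briefComment": "closed"},
    {"uuid": "ccc333", "ticketUuid": "deadbeef00", "briefComment": "opened"},
]

def changes_by_ticket(entries):
    """Group change-artifact hashes under the ticket they belong to,
    preserving the timeline's ordering within each ticket."""
    grouped = defaultdict(list)
    for e in entries:
        grouped[e["ticketUuid"]].append(e["uuid"])
    return dict(grouped)

print(changes_by_ticket(entries))
```

If the proposed rename to `commitHash` ever lands, only the two key lookups above would need to change.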
Changes to www/json-api/conventions.md.
︙

**`FOSSIL-3000`: Usage Error Category**

- `FOSSIL-3001`: Invalid argument/parameter type(s) or value(s) in
  request
- `FOSSIL-3002`: Required argument(s)/parameter(s) missing from request
- `FOSSIL-3003`: Requested resource identifier is ambiguous (e.g. a
  shortened hash that matches multiple artifacts, an abbreviated date
  that matches multiple commits, etc.)
- `FOSSIL-3004`: Unresolved resource identifier. A branch/tag/uuid
  provided by client code could not be resolved. This is a special case
  of #3006.
- `FOSSIL-3005`: Resource already exists and overwriting/replacing is
  not allowed, e.g. trying to create a wiki page or user which already
  exists. (FIXME? Consolidate this and resource-not-found into a
  separate category for dumb-down purposes?)

︙
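The `FOSSIL-NNNN` codes above are grouped in ranges of 1000, with the round number naming the category (here, 3000 = Usage Error). A client can exploit that numeric structure for coarse error handling. This is a speculative sketch, not part of the API: only the 3000 range is documented in this excerpt, and the helper name `error_category` is invented here:

```python
def error_category(result_code: str) -> int:
    """Map a resultCode string like 'FOSSIL-3002' to the base code of
    its category (e.g. 3000), assuming categories span ranges of 1000."""
    num = int(result_code.split("-", 1)[1])
    return num - (num % 1000)

print(error_category("FOSSIL-3002"))  # 3000
```

For example, a client might treat any code in the 3000 range as "fix the request and retry" rather than as a server-side failure.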